Interesting. What datacenter is down?

You are exactly right, Vadim, and the conversation around GE and the problems with it seems to have fallen on deaf ears at Storj.

I totally agree with @Vadim and @penfold on this.

I've done GE for 160 satellites and it's only worked less than 30% of the time.

Also, vetting 500 nodes on a single IP takes forever and is a waste of time, imho…
basically the vetting time can be calculated…

Let's say vetting a single node takes 1 week…
sure, it might not really, but it does for the highly active satellites…

So it would take roughly 500 weeks for 500 nodes behind one IP to collect the same audits and get vetted,
because vetting is based on audits, and audits are based on stored data, which is split across all 500 nodes.
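
To make that back-of-the-envelope reasoning explicit, here is a tiny sketch. The 1-week figure and the assumption that nodes behind one /24 split the subnet's ingress (and therefore its audits) evenly are the same assumptions as above, not measured values:

```python
# Rough sketch of the "500 nodes behind one IP" vetting argument.
# Assumptions (hypothetical, not Storj's actual parameters):
#  - a single node on its own /24 collects enough data/audits to vet in ~1 week
#  - nodes behind the same /24 split that subnet's ingress evenly, so each node
#    accumulates data (and therefore audits) N times slower

def weeks_to_vet(nodes_behind_subnet: int, weeks_for_single_node: float = 1.0) -> float:
    """Estimated weeks until the last node behind one /24 is vetted."""
    return weeks_for_single_node * nodes_behind_subnet

print(weeks_to_vet(1))    # ~1 week
print(weeks_to_vet(500))  # ~500 weeks (about 9.6 years) under these assumptions
```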

Thus the only effect this has is to reduce held amounts and ensure 100% payouts when the nodes are activated…

Is that cheating… maybe… it sure makes the held amount even more irrelevant than it was in the first place… held amounts worked pretty okay in the past… around the end of the testnet.

but now held amounts are a joke.
One example: I've got a 6.5TB node that has $4.89 in held amount and $5.31 already paid out…

To GE this node I would have to upload 6.5TB, or most of it… let's say even just half of it…
so that's 3.25TB uploaded, and I get paid $4.89 for it, which works out to roughly $1.50 per TB…
that's less than 1/3 of the current egress price.

Last time I did a GE it took 60 days to finish, and I ended up having to restructure my network to better handle the load.

So why Th3Van even bothers with doing this is a mystery to me.
Seems like a waste of time for little to no reward.

1 Like

If you read a bit more into the story, he had them on VPS systems and then moved them to be behind 1 IP, so likely they are already vetted.

Completely a joke.

Started the first backup node ~85 weeks ago.

Backup nodes vetting status

Th3Van.dk

1 Like

It's true, but we can marginalize it.
And yes:

Because if you're a whale and you get decommissioned post-servers with 5-year-old HDDs, which I don't believe come for free, but even if they did, you could sell them, or you can use them with STORJ.
Those are most often enterprise-grade HDDs; they can easily work another 5 to 7 years.
Normal people haven't got this advantage.
Normal people have some average HDDs for home end users, with a lot less durability by design.
You have to earn at least the value of the HDD in the project before it dies.
It can take 2-3 years of node operation just to cover the costs. Whales can afford that.
Normal home people can't. So the current model discourages normal people from joining STORJ, because the reward is too low. And whales still find ways to operate and profit.

This is bad for node decentralization, if it doesn't make sense for you to join the network with the 1 hard drive you have, say 4 TB or 6 TB.

That's why I proposed a concept where normal people with average HDDs could get a return in under 1 year, or even in months! Whales will still be present, but this would raise the number of home SNOs joining and increase decentralization at the same time.

A concept where downloading content is very cheap and nodes STILL earn more than now, and STORJ Inc. earns more than now! And STORJ would stop being just cold storage!
It could make STORJ a global revolution, not just a curiosity people are afraid to trust. If the price is low enough, the temptation is just too strong. And I've shown that it can work for everyone, as long as customers agree.

It has more to do with how much money you have and how much access you have to datacenter hardware than with technical skill, although it does take a bit of skill to set up more than 500 nodes and the memory to keep track of them all. It's not really about being butt-hurt; it's more about where Storj is headed. We already got a price decrease because of how many whales there are, and this will eventually push out any kind of decentralization that Storj has portrayed. It's going to be whales against other whales, and all data will sit in 10 datacenters around the world instead of being spread across every country. Wherever there is money, there will be a whale and a datacenter attached to it. If not a datacenter, then people will be buying IPs, but all the data will still be in one place.
And yes, at this point Storj wouldn't survive without whales for this very reason: it takes money, and lots of it, to host this many servers around the world, and the whales are footing the bill for Storj, so why would they try to stop it?

No it's not, since there is no software being rewritten or any attempt to interfere with the network. Although, I feel with you, as I did concerning the use of VPNs to increase the storage size. This one is kind of the opposite. These are kinds of loopholes which are not quite aligned with the spirit of STORJ.

Especially incubating drives is undermining the idea of the vetting process, of which many pros and cons have already been discussed here. And from previous posts, I also understood some other fellas are doing comparable things (incubating large drives but only offering a small part, so the held-back amount stays low, and increasing the drive size later).

On one side, @Th3Van is serving the network very well by offering storage. Since he's running a data center, I'm tempted to believe it's professional, so the data is being stored more robustly than by many small operators (including RAID 6; although I'm not a fan of it, it is in any case some kind of redundancy with a chance of recovery, instead of the RAID 0 which I, for one, use).

On the other side, this would mean that of the 3PB he's offering, only a meager 40-100TB would be used, since there would then be an equilibrium between the amount of uploads and deletions if they were all behind one public IP. But if I look at the overview, that's not what happened, and it's also not the case: th3van.dk literally says there are 3-4 SNs behind one /24-subnet. Meaning all these storage nodes that were down were previously treated like 125 unique storage nodes (107 if I check the overview).

The real question, in my opinion, should be: can this be a threat to the network as a whole? There's some math involved in that question, which I did before.

Primary data is:

  • Ca. 21000 SNs
  • Ca. 12000 unique /24-subnets
  • This fella is using 107 unique subnets.

For every piece offered on the network:

  • Chance he will get it: 107/12000 = 0.9% (probably a bit higher, being in Amsterdam and having a good connection; but taking this for calculation).

Distributing 80 pieces, as is the case on the STORJ network:

  • Chance he will get no piece: ((12000-107)/12000)^80 = 48.8%
  • Chance he will get one piece: C(80,1)*(107/12000)^1*((12000-107)/12000)^79 = 35.2%
  • Chance he will get two pieces: C(80,2)*(107/12000)^2*((12000-107)/12000)^78 = 12.5%
  • Chance he will get three pieces: C(80,3)*(107/12000)^3*((12000-107)/12000)^77= 2.9%
  • Chance he will get four pieces: C(80,4)*(107/12000)^4*((12000-107)/12000)^76 = 0.5%
  • Chance he will get five pieces: C(80,5)*(107/12000)^5*((12000-107)/12000)^75 = 0.1%
  • Chance he will get six pieces: C(80,6)*(107/12000)^6*((12000-107)/12000)^74 = 0.01%
  • Chance he will get more than six pieces: 100%-[previous numbers] = 0.00081% (=1/123000).

This means the chance of rendering 6 or more pieces redundancy-wise useless is practically nil. Even if all pieces were divided over a network full of whales like him, and they all had an independent uptime of 80+%, it would be nearly impossible to reach the point where 59 pieces were not available. This is actually also a plea to lower the N-number of the Reed-Solomon scheme the STORJ network uses at the moment.
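
For anyone who wants to reproduce these percentages, here is a minimal sketch of the same binomial calculation. It assumes ~12,000 /24-subnets with equal selection probability, 107 of them belonging to this one operator, and 80 pieces per segment; exact outputs depend on rounding:

```python
from math import comb

SUBNETS_TOTAL = 12_000    # approximate number of unique /24-subnets
SUBNETS_OPERATOR = 107    # subnets attributed to this one operator
PIECES = 80               # pieces per segment

p = SUBNETS_OPERATOR / SUBNETS_TOTAL  # chance a given piece lands with the operator

def prob_pieces(k: int, n: int = PIECES) -> float:
    """Binomial probability that exactly k of n pieces land with the operator."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

for k in range(7):
    print(f"{k} pieces: {prob_pieces(k):.4%}")
print(f">6 pieces: {1 - sum(prob_pieces(k) for k in range(7)):.5%}")
```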

1 Like

Storj is also dealing with the possibility that all Russia's nodes may at some point be cut off.

4 Likes

Yeah, so? Then why 80 pieces, of which 51 are redundancy?
Even then, most of that data would also be from Russian customers in that case.
If you can hand over better metrics to explain why, then I'm more than interested. But I actually don't see why so much redundancy is necessary.

This is absolutely NOT the case.

You don't need this advantage, and honestly it probably wouldn't be much of an advantage anyway, as drives that old won't have the capacity / efficiency of newer drives, plus you'd have drives dying left and right (relatively speaking). Personally I would rather buy brand-new, helium-filled, power-efficient, high-capacity drives that will be reliable long term… and yes, at scale it IS profitable even with the expense of purchasing multiple IPs. This is quite literally the only way running Storj nodes is profitable, at least to the point where it's actually worth your time messing around with. And let's face it, if there's no profit in it, nobody (with few exceptions of course) other than those hoping for that potential someday (most of whom don't actually understand the economics of it in the first place) will run nodes, and Storj would evaporate into thin air.

You can't expect Storj (or any other similar platform) to pay you more than a "whale" simply because you're smaller and have higher operating costs. This simply won't happen.

1 Like

Alright, some data to back that up?
I mean, as far as I know, we're all in the dark concerning the reasons why.
As far as I know, 1625 of 12441 /24-subnets are Russian, consisting of 2774 separate nodes.

So, the metrics again:
Distributing 80 pieces, as is the case on the STORJ network (assuming equal distribution):

  • Chance Russia will get no piece: ((12441-1625)/12441)^80 = 0.001%
  • Chance Russia will get one piece: C(80,1)*(1625/12441)^1*((12441-1625)/12441)^79 = 0.016%
  • Chance Russia will get two pieces: C(80,2)*(1625/12441)^2*((12441-1625)/12441)^78 = 0.098%
  • ...

I did it in Excel, showing this:

Amount of pieces Chance Cumulative <= N
0 0,001% 0,001%
1 0,016% 0,018%
2 0,098% 0,116%
3 0,382% 0,497%
4 1,104% 1,602%
5 2,522% 4,124%
6 4,737% 8,861%
7 7,523% 16,384%
8 10,314% 26,698%
9 12,396% 39,094%
10 13,223% 52,317%
11 12,642% 64,960%
12 10,922% 75,881%
13 8,583% 84,464%
14 6,171% 90,635%
15 4,080% 94,715%
16 2,490% 97,205%
17 1,408% 98,613%
18 0,741% 99,354%
19 0,363% 99,717%
20 0,166% 99,883%
21 0,071% 99,955%
22 0,029% 99,983%
23 0,011% 99,994%
24 0,004% 99,998%
25 0,001% 99,999%
26 0,000% 100,000%
27 0,000% 100,000%
28 0,000% 100,000%
29 0,000% 100,000%
30 0,000% 100,000%
31 0,000% 100,000%
32 0,000% 100,000%
33 0,000% 100,000%
34 0,000% 100,000%
35 0,000% 100,000%
36 0,000% 100,000%
37 0,000% 100,000%
38 0,000% 100,000%
39 0,000% 100,000%
40 0,000% 100,000%
41 0,000% 100,000%
42 0,000% 100,000%
43 0,000% 100,000%
44 0,000% 100,000%
45 0,000% 100,000%
46 0,000% 100,000%
47 0,000% 100,000%
48 0,000% 100,000%
49 0,000% 100,000%
50 0,000% 100,000%
51 0,000% 100,000%

As you can see, this risk is covered with 25 pieces… Maybe a few more, for data of Russian customers, who wouldn't be allowed to stay on Storj anyway in case of a cut-off. So again, please give some metrics that cover the matter…

For example, if you had taken N=60 with K=29 in the Reed-Solomon scheme:

Amount of pieces Chance Cumulative <= N
0 0,023% 0,023%
1 0,203% 0,226%
2 0,900% 1,126%
3 2,614% 3,740%
4 5,597% 9,336%
5 9,417% 18,754%
6 12,970% 31,723%
7 15,032% 46,755%
8 14,962% 61,717%
9 12,988% 74,704%
10 9,951% 84,656%
11 6,796% 91,452%
12 4,169% 95,621%
13 2,313% 97,934%
14 1,167% 99,100%
15 0,537% 99,638%
16 0,227% 99,865%
17 0,088% 99,953%
18 0,032% 99,985%
19 0,011% 99,996%
20 0,003% 99,999%
21 0,001% 100,000%
22 0,000% 100,000%
23 0,000% 100,000%
24 0,000% 100,000%
25 0,000% 100,000%
26 0,000% 100,000%
27 0,000% 100,000%
28 0,000% 100,000%
29 0,000% 100,000%
30 0,000% 100,000%
31 0,000% 100,000%

You see, even 20 would be sufficient in this case… (because of the lower chance of pieces ending up in Russia). And you would still have 11 of them left for other occurrences, like loss and downtime of other nodes.
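
The same binomial model backs up the "covered with ~25 pieces" reading for N=80 and the "~20 pieces" reading for N=60. A minimal sketch, under the same equal-distribution assumption (1,625 Russian subnets out of 12,441):

```python
from math import comb

RU_SUBNETS, ALL_SUBNETS = 1_625, 12_441
p = RU_SUBNETS / ALL_SUBNETS  # chance a given piece lands on a Russian /24-subnet

def prob_at_most(m: int, n: int) -> float:
    """P(at most m of n pieces end up on Russian subnets), binomial model."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(m + 1))

# With N=80 pieces, losing every Russian-held piece almost never costs more than ~25 of them:
print(f"{prob_at_most(25, 80):.5%}")   # ~99.999%
# With N=60 pieces, ~20 already covers the same risk:
print(f"{prob_at_most(20, 60):.5%}")   # ~99.999%
```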

2 Likes

If even Russia has such small probabilities, then why is there so much talk about datacenters that have 100 nodes and 100 IPs? There, the probability of even 2 pieces is almost nothing, or nothing at all. It looks more like people are envious that they can't do such setups and therefore try to make sure that no one can.

All the hardware that I'm using for Storj nodes is placed in our DC, which is equipped with a cooling system, an automatic diesel generator, a fire extinguishing system and multiple ISP uplinks (BGP AS49974).

I think you are mixing my Primary nodes and my Backup nodes together :

Primary storage nodes : (Running on dedicated hardware - list can be found at www.th3van.dk)

  • Number of nodes : 105 (001 and 032 are not present since they do not exist)
  • Number of IPs (/24 subnets) in use for the 105 storage nodes : 30 (~3.5 nodes per subnet)
  • Number of SAS HDDs for SN data : 105 (no RAID - one HDD per storage node)
  • Total available space for SN : 1,986 TB
  • Used space for SN : 1,191 TB
  • Free available space for SN : 795 TB
  • First node joined : 13-07-2021

Backup storage nodes : (Running on other dedicated hardware - but were down for a couple of hours, a few days ago)

  • Number of nodes : 501
  • Number of IPs (/24 subnets) in use for the 501 storage nodes : 1
  • Number of SATA SSDs for SN data : 8 (Samsung 4 TB in RAID 6)
  • Total available space for SN : 24 TB
  • Used space for SN : 17 TB
  • Free available space for SN : 8 TB
  • First node joined : 22-12-2021

Th3van.dk

I think 10 is a little low of an estimate there, even if you're only accounting for datacenters… however, how is this not still decentralized relative to typical cloud infrastructure? Look at what's happened to crypto mining in general. It's become centralized in the sense that it's centralized around people with money to invest, but it is still decentralized by nature among them and around the world. This has been fought over and over again using different methods, largely different algorithms to resist ASICs, but even so it was always an inevitability. This is just simply how the world works.

As I've said before, the answer to the whale issue is simply more whales… but if you don't have the technical skills to run something at that scale, you simply shouldn't be doing it… especially those of you who seem to be so concerned with the risks whales supposedly pose to the network… risks which are already factored into Storj's model.

One way to look at it, as far as I'm concerned, is that Storj (maybe on purpose, maybe accidentally) has done a pretty good job of separating those with the technical skills to run scaled setups from those who might otherwise acquire hundreds of TBs of data before ever running into even a minor issue they can't resolve on their own. And trust me, scaled setups definitely see more issues.

Furthermore, whales are actually financially incentivized to keep their servers running, and there's a heavy cost associated with any downtime. If this wasn't the case, there would be no whales in the first place. This is also largely the reason behind "node incubation". If a whale happens to lose a node for whatever reason, they want to replace it as quickly as possible with a node that's ready to go, in order to utilize the hardware and maintain their efficiency as quickly as possible and minimize losses.

In the case of whales, the vetting process is essentially pointless as there's already a pretty high probability those nodes won't be going offline anyway, but clearly it doesn't hinder anything either so there's really no point in discussing it.

On the other hand, you'd be better off arguing about node incubation in terms of the held amount, which would actually carry some weight. However, if Storj sees this as an issue, the simple solution would be to forgo the current holding model for one that simply holds an amount proportional to the repair cost for the data the node stores, adjusting over time, as opposed to an ambiguous figure based on node performance during a predetermined period of time. End of discussion.
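
A minimal sketch of what such a holding model could look like. The repair cost per TB and the decay factor below are hypothetical placeholders, not Storj figures:

```python
def held_amount_usd(stored_tb: float,
                    repair_cost_per_tb_usd: float = 4.0,  # hypothetical repair cost
                    trust_factor: float = 1.0) -> float:
    """Held amount proportional to the estimated cost of repairing this node's data.

    `trust_factor` could decay from 1.0 toward some floor as the node proves
    reliable over time, instead of the current month-based held-back schedule.
    """
    return stored_tb * repair_cost_per_tb_usd * trust_factor

# Example: a 6.5 TB node, early in its life vs. after it has proven itself.
print(held_amount_usd(6.5, trust_factor=1.0))   # full estimated repair cost held
print(held_amount_usd(6.5, trust_factor=0.25))  # reduced as trust is established
```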

3 Likes

Even Amazon has way more datacenters; do we consider them decentralized?

Again, this doesn't require technical skills; it requires money, lots and lots of money.

I built my setup with a very small investment, most of it paid for by Storj over time; I just expand step by step.
You just have to know, or want to learn, how to do it.

Ok, then you're using a total of 106 /24-subnets. Doesn't matter that much for the story, I would say.

But there are actually three things I don't understand:

  • 3PB was down, which you attributed to the 501 backup nodes being down, but these figures don't seem to add up, since that would mean ~6TB per node instead of the 24TB total stated here.
  • What's your plan with them? 501 is quite many…
  • Since they're all behind one IP, you should be at the equilibrium where deletions balance ingress. But that's not directly apparent from your data. Or did I overlook something, like older data being deleted less?

Although, I must say, it's a clever approach. Especially if you started them all back then, when you still got an application fee.

That's exactly the point I'm trying to make, as I also did when we had the same kind of discussion about the use of VPNs. They aren't that bad after all from the perspective of the network. But because they make it necessary to increase the redundancy by one or two pieces, it feels a bit unfair to those who don't have the skills, aren't as clever, or both, and who are being paid a little bit less because of it (less ingress means fewer STORJ tokens, more redundancy means less payment per piece).

They have their own redundancy, but it's a single company in control of it… so no.

Yes, it does require money. But you can't tell me hosting 4 TB on a Raspberry Pi with an external hard drive is the same as hosting hundreds of TB across many servers, maintaining the equipment, managing technical issues, incorporating UPS systems, backup generators, etc., as many whales do. It's not at all the same thing.

It's not rocket science; it all takes money. One server that I own could easily host more than 200 nodes, yet I choose not to scale up because it's not cheap to start 200 nodes. If I were to host them, I'd need at least 200 x 18TB hard drives, and each of those drives would cost me 500 dollars new. Not to mention I'd need 200 IPs to go with that. So it is mostly about money; sure, not anyone can run 200+ nodes, especially not on an RPi4, but that's more hobby level than professional level. Technical skills are about the same. Deep pockets is a different problem.