Interesting, which datacenter is down?

In business, what is in the contract is definitive. Things the contract does not explicitly forbid are permitted. The moment you start invoking unwritten moral claims, you lose.

Let’s wait for the proposed new T&C document.

2 Likes

Well, this specific thing is explicitly prohibited:

5.1.7. Manipulate or alter the default behavior of the Storage Network to artificially increase or decrease the value of any reputation factor of any Storage Node;

5.1.12. Manipulate or alter the default behavior of the Storage Network to artificially increase or decrease the value of any reputation factor of any Storage Node;

(yes, it’s so important that it’s apparently listed twice)

Incubating the nodes is artificially increasing their reputation.

Also, one might argue that this “incubation” process, not being explicitly authorized, falls under this as well:

5.1.17. In any other way attempt to interfere, impede, alter, or otherwise interact in any manner not expressly authorized hereunder with the Storage Services or the operation of any other Storage Node(s).

Not true. The ToS says not to “manipulate or alter the default behavior”. In other words, manipulating the code or trying to trick it somehow. Running more than one node at the same time definitely does not violate this section as it does not “manipulate” or “alter” their behavior in any way. It is not against the ToS to run more than one node.

I made a suggestion based on this topic here

I’m specifically referring to vetting nodes on one hardware/system/environment and then deploying them to operate on another hardware/system/environment they are not vetted on. This bypasses the vetting process: tricking it into working right away, without being properly vetted in the new environment.

In terms of the whale argument, I’ve said this before and I’ll say it again…

Without whales, Storj will never work… period!

Think about this for a second.
The whole concept is supposedly to use “unused” storage space and not spend any money. Ok fine, great concept, but quite literally impossible in reality. Sure, maybe while Storj is just starting out and has very little data, and as long as enough people are willing to make a hobby of running Storj nodes for fun and basically for free, but beyond that it simply ain’t happening.

How about some numbers…
How much data might the average person have just laying around? What size hard drives does your average computer come with these days? Even gaming PCs usually only come with 1 TB drives out of the box, but a gamer is going to have that filled pretty quick. So let’s just use the minimum of 550 GB per node that your average person might have available, and we’ll also just ignore the reality of using the same hard drive you use for everything else for the sake of this.

Now out of everyone that has this extra space, how many have it on PCs that are literally powered on all the time? Most computers go to sleep… to like… save power, ya know? Now, how many of these people have the technical skills to do this? How many people do you know in your own family that know what a port is?

Taking it a step further, how many of these people even know about the Storj project?

How many people currently run nodes? Tell you what, let’s just say all ~22,000 are individual people each running a node… in reality it’s probably about 1/10th that, but that’s ok.

22,000 people with 550 GB and PCs that just happen to be powered on all the time.
That’s roughly 12 PB. Nearly HALF of CURRENTLY stored data.
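
If you want to sanity-check that, here’s the rough math in a few lines of Python (the 22,000 nodes and 550 GB per node are the same assumptions as above):

```python
# Back-of-the-envelope estimate using the assumptions above:
# ~22,000 operators with ~550 GB of spare space each.
nodes = 22_000
spare_gb_per_node = 550

total_gb = nodes * spare_gb_per_node      # 12,100,000 GB
total_pb = total_gb / 1_000_000           # ~12.1 PB in decimal units

print(f"Total spare capacity: {total_pb:.1f} PB")
```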

Many of you think a whale going offline will cripple their network, but the truth is that without people buying hardware it would already be crippled. Additionally, if they tried to get rid of whales now it would still cripple them. The only way it maybe wouldn’t is if they changed their whole mantra and told everyone else to hurry up and go buy hardware to absorb the whales’ data, which they won’t do due to the liability factor. Why do you think they tell you not to buy anything? Furthermore, this would move a lot of data off enterprise-grade hardware and onto lower-powered PCs, SBCs and probably a lot of slower external hard drives.

The fact of the matter is you’re all just butthurt YOU’RE not a whale… I get it… so go learn how. Go acquire the technical skills they clearly have that you don’t and figure it out. If they can do it so can you, nothing’s stopping you. Fight the whales by becoming whales yourselves… wouldn’t that further decentralize the network anyway? The only way to make money in data storage is at scale. You know it… whales know it… and Storj knows it!

Eventually everything becomes centralized to the degree that it can. The only difference with decentralized projects like Storj or crypto mining is that anybody can play the game. That doesn’t mean, though, that someone with no money or knowledge can profit from it the same as someone who does. Don’t confuse decentralized with equal opportunity.

1 Like

Sorry, in what way? They gain reputation the same way as any other node, by proving they are capable of holding data over a long period of time.

5.1.17. does not apply either. The operator does not put any impediments on node operation; Storj Inc. does, by limiting traffic to these nodes based on the /24 rule. Why would this be the operator’s fault?
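
To illustrate what the /24 rule does (a simplified sketch, not the actual satellite selection code; the IPs are made-up examples): nodes whose public addresses fall in the same /24 network are grouped together and effectively share one node’s worth of ingress.

```python
import ipaddress
from collections import defaultdict

# Hypothetical node IPs; the real satellite selection logic is more involved,
# but the grouping idea is the same: one /24 network counts as one "location".
node_ips = ["203.0.113.10", "203.0.113.57", "198.51.100.4"]

groups = defaultdict(list)
for ip in node_ips:
    network = ipaddress.ip_network(f"{ip}/24", strict=False)
    groups[str(network)].append(ip)

for network, members in groups.items():
    # All members of a group effectively share the ingress one node would get.
    print(f"{network}: {len(members)} node(s) sharing one selection slot")
```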

Then they are moved to another system that wasn’t proven to be as stable as the incubator. The problem is proving on one system and running on another. The whole point of vetting is to ensure that the current system running the node can maintain uptime and service level.

Ok, let’s say I start a node on a Raspberry Pi, then move it to an enterprise-grade server. Did I just violate the ToS? Let’s say I did… would you sit there and argue with me that I shouldn’t have done that? Let’s go the other way around… if I started a node on an enterprise-grade server and then moved it to a Pi… lots of people run nodes on Pis… is that against the ToS?

This line of logic forbids any hardware changes to the system the node runs on. Good luck with that.

2 Likes

I don’t know if it is against the ToS as written today, but clearly if you “substantially” change the underlying platform the vetting status should be invalidated.

Because you tested reliability on one platform and are now running on a completely different one. That different one wasn’t vetted. Irrespective of which one is enterprise vs. RPi.

I hope I made my point clear.

How to fix this? I don’t know.

In an ideal world, a node could self-report hardware and environment changes and reset its vetting status. (Literally: upgraded RAM, bam, the vetting process starts again.)
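
Just as a thought experiment of what such self-reporting could look like (nothing like this exists in the storagenode software; the attributes and names below are invented for illustration):

```python
import hashlib
import json

def environment_fingerprint(ram_gb: int, disk_serial: str, os_name: str) -> str:
    """Hash a few hypothetical environment attributes into a fingerprint."""
    payload = json.dumps(
        {"ram_gb": ram_gb, "disk_serial": disk_serial, "os": os_name},
        sort_keys=True,
    )
    return hashlib.sha256(payload.encode()).hexdigest()

previous = environment_fingerprint(8, "WD-XYZ123", "linux")
current = environment_fingerprint(16, "WD-XYZ123", "linux")  # "upgraded RAM, bam"

if current != previous:
    # In this imaginary scheme the node would report the change and restart vetting.
    print("Environment changed: vetting status would be reset")
```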

In practice this is neither possible to implement properly nor to enforce, for obvious reasons. So the next best thing is a policy requirement prohibiting moving nodes, or a mechanism to voluntarily reset the vetting status after a move. This of course also won’t happen, so the next-next best thing is to prohibit incubation or any hardware change at the ToS level. Average Joes would ignore it, but the big fellas would abide.

Otherwise, just throw away the vetting process in the first place; it would be meaningless. The software was tested, right? Deploy it!

Yes. The compromise is somewhere in the middle, and it is yet to be found. But clearly incubating in one place and deploying in another is just “this one trick to get around vetting the target system”, and if nothing else it is indicative of a problem in the logic here.

It doesn’t really matter either way. If a node fails to perform at any point it will eventually be suspended. There are already things in place to deal with all this. Pretty sure the vetting process is more to make sure the node is going to stay online for more than 10 minutes before starting to load a bunch of data onto it. The vetting process doesn’t stress test the hardware or anything as far as I’m aware. And if it can run on a Pi it can pretty much run on anything. I really don’t know why you think this is such a big deal.

Well, the moved node would bypass that check.

It’s really simple. Regardless of the reason behind the vetting process, today it’s mandated for new nodes but not for moved nodes. That’s a contradiction and hence indicates a problem.

Only two possibilities exist:

  • vetting is not necessary: cancel it for everyone.
  • vetting is necessary: then a moved node needs to be vetted again, for the exact same reason the original one was.

See the contradiction? The same reasoning you use to justify not vetting a moved node can be applied to a freshly created node.

I would not guess; I prefer to wait for the new ToS. However, I passed your question to the team.

The idea of the held amount was introduced to solve several problems:

See How does held back amount work? - Storj Docs

However, if you start incubating nodes, this idea becomes less appealing, because the held amount stays minimal, so it would not help to incentivize the Operator to run their node as long as possible. On the other hand, to do so you need to run your node all that time anyway, so the initial incentive still works.
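
As a rough illustration of the schedule described in the linked docs (please verify the exact numbers there; this is only a paraphrase), the held-back fraction tapers off as the node ages:

```python
def held_back_fraction(node_age_months: int) -> float:
    """Approximate held-back schedule as described in the Storj docs
    (illustration only; check the linked page for the authoritative numbers)."""
    if node_age_months <= 3:
        return 0.75
    if node_age_months <= 6:
        return 0.50
    if node_age_months <= 9:
        return 0.25
    return 0.0  # from month 10 onward nothing new is held back

for month in (1, 4, 7, 10):
    print(f"Month {month}: {held_back_fraction(month):.0%} of earnings held back")
```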

Regarding the vetting process: we always try to keep it no longer than a month. Unstable nodes usually fail in the first month, so this also brings some savings on repair costs. Of course, incubating nodes will not prevent this failure when the node is later moved onto unreliable hardware. However, for a new Node Operator it still works, while they learn how to manage the node, what setup they prefer, and so on. And the probability that a new Node Operator loses the node is higher than for a SNO who already ran their first node and managed to keep it running after the vetting process.
So I would say that the vetting process should remain for new nodes anyway.
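
To put “not longer than a month” in context (the audit threshold per satellite below is the commonly cited community figure, so treat it as an assumption rather than an official number):

```python
# Toy model of vetting progress per satellite. The 100-audit threshold is the
# figure commonly quoted by the community, not confirmed in this thread.
AUDITS_TO_VET = 100

successful_audits = {"satellite-us1": 100, "satellite-eu1": 37}

for satellite, passed in successful_audits.items():
    status = "vetted" if passed >= AUDITS_TO_VET else f"{passed}/{AUDITS_TO_VET} audits"
    print(f"{satellite}: {status}")
```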

I do not think that we should use the ToS to limit the Operator’s right to move nodes or to run multiple nodes behind the same /24 subnet. I prefer to have technical solutions implemented in the protocol instead, especially if it could affect the customers’ data.

4 Likes

Because the current GE process is so lengthy, when I have needed to recover space by deleting nodes, the withheld percentage has never been a consideration, since it is much cheaper to lose that amount than to buy further storage. It’s just easier, simpler and cheaper to write it off.

1 Like

Please people, don’t let this heat up further; think about solutions!

Here is my suggestion

And I feel that vetting is not such a big deal anymore, since my second node already gets good ingress.

Maybe a suggestion for the GE process is worth thinking about. Is there info somewhere on how it works?

GE doesn’t work for a lot of people:

  1. You can’t do it in the first 6 months.
  2. It is a very slow process and could be optimized.
2 Likes

And here’s the argument that I already voiced: vetting hardware is not practical. Vetting the operator’s capabilities is, and that is the function the current implementation of vetting actually performs.

Note that a skilled operator may be able to move a node off failing hardware. Is this not the preferred action, as opposed to failing the node and incurring all of the repair traffic? Especially since skilled operators will operate larger nodes, so the impact of the repair traffic on the network would be bigger. Yet this is something you suggest banning.
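
For what it’s worth, moving a node is an ordinary, documented operation. A simplified outline of the usual steps (container name, host and paths are placeholder examples; see the official docs for the exact procedure): stop the node cleanly, copy the identity and storage to the new machine, then start it there with the same identity.

```python
import subprocess  # only needed if you uncomment the execution line below

# Placeholder names/paths; the exact procedure is in the official Storj docs.
CONTAINER = "storagenode"
OLD_STORAGE = "/mnt/old-disk/storagenode"
NEW_HOST_PATH = "operator@new-host:/mnt/new-disk/storagenode"

steps = [
    ["docker", "stop", "-t", "300", CONTAINER],                           # stop the node cleanly
    ["rsync", "-a", "--delete", OLD_STORAGE + "/", NEW_HOST_PATH + "/"],  # copy identity + data
]

for cmd in steps:
    print("would run:", " ".join(cmd))
    # subprocess.run(cmd, check=True)  # uncomment to actually execute the step
```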

1 Like

BTW, the discussion is not helped by the fact that, given the recent ratio between vetted and unvetted nodes, the original goal of reducing the amount of data stored on new nodes would now be achieved by disabling the vetting process. A case where the implementation does not fulfill the intentions.