Who is considered a Whale SNO?

Hi to all,
I have a question for the entire community because I couldn’t find the answer.

Who is considered a Whale SNO?

The term occasionally gets thrown around, so I was wondering what the conditions are.


I would definitely consider @Vadim :face_with_hand_over_mouth:


I think someone who has

  1. More than 10TB stored
  2. More than 1 storagenode
  3. More than $1000 in earnings

I think this is low; it would basically include any SNO over 20 months old with more than 10 TB of capacity, no matter how old their second node was…

So someone who joined about 20 months ago and has had free capacity since would just have to add a second node on the same global IP, and they'd be considered a whale…

That's just downright offensive to whales… it's a lot of work being a whale lol
Them Raspberry Pi whales… hehe

There has to be a better measure than that.


I agree. I'm not aware of such a definition in Storj's case.

I used to be on Sia for about 6 months, then gave up. But they had a system for detecting nodes under the same owner, so those owners wouldn't be able to take down the network. I'm aware that Storj is protected against this because there are 12K nodes and because of the way the data is split.

I was thinking of some dynamic definition, like a percentage of the total network stored on your nodes. That would take into account both people who have one massive node and those who have multiple geo-separated nodes.


It certainly becomes a tricky question, and with the network continually growing it might never really be a problem again… I also think Storj uses latency or something similar to identify nodes within a certain region, or it has been mentioned that they were doing, or wanted to do, that.

But as far as I can tell, the data distribution is still rather even across all nodes; if a single SNO could take down the network, that would be a problem…

Also, nodes running from the same place will share the same issues, so when they have collective downtime the network adapts, since it's basically alive and regenerates data as needed. That most likely makes the odds of a single "collective" taking down the network very low…

And at a certain point, even if we excluded entire nations, I doubt the network would falter. It seemed to do fine during the massive incidents over the couple of years I've been around, like the OVH cloud data center fire or the internet backbone misconfigurations that brought down a large part of the States some time ago.

Also, running many nodes is extremely I/O-intensive: new nodes seem to take about the same share of I/O as larger, older nodes, so starting up has an even lower reward than the basic numbers would suggest.

Even a 500 GB node will have millions of files the host has to deal with.
I know some people have tried to run thousands of nodes, but that didn't turn out well lol;
they had little to no idea just how intensive the workload really was.

Anyway, I think a percentage of the network might be a much fairer measure. That just leaves the question of how much, and I suppose we are talking ‰ (per mille) of the network.

And really, the network should be resilient well into whole-percent ranges… so good luck to anyone who wants to try to bring it down that way… and that's just assuming the system were dumb, which I don't think it is…

Today we look to be at 9.24 PB, that's 9,240 terabytes, so a 100 TB collective of nodes would barely be above the 1% mark.
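The percentage-of-network idea above can be sketched in a few lines. This is a minimal illustration only, using the thread's 9.24 PB figure; the 1% cutoff is a hypothetical placeholder, since the actual threshold is exactly the open question being discussed.

```python
# Hypothetical "whale" check based on share of total network storage.
# NETWORK_CAPACITY_TB comes from this thread's ~9.24 PB figure;
# WHALE_THRESHOLD_PCT is an assumed placeholder, not an official value.

NETWORK_CAPACITY_TB = 9240   # ~9.24 PB stored network-wide
WHALE_THRESHOLD_PCT = 1.0    # assumed cutoff for illustration

def network_share_pct(sno_stored_tb: float) -> float:
    """Percentage of total network data held across one SNO's nodes."""
    return 100.0 * sno_stored_tb / NETWORK_CAPACITY_TB

def is_whale(sno_stored_tb: float) -> bool:
    """True if this SNO's share meets the assumed whale threshold."""
    return network_share_pct(sno_stored_tb) >= WHALE_THRESHOLD_PCT

print(round(network_share_pct(100), 2))  # a 100 TB collective -> ~1.08
print(is_whale(100))                     # True at the assumed 1% cutoff
```

One nice property of this definition is that it is dynamic: as the network grows, the same absolute capacity counts for less, so yesterday's whale can quietly become an ordinary node operator.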



Not so sure about 3, but I agree on the other two. I'd sooner say "earns ~$100 a month."

Vadim is a Blue Whale! Seriously, he has done an awesome job on his setup and he deserves every success.


Possibly, he is the one


Here you go :sweat_smile: :upside_down_face:
