Only facts: “EU-North satellite only for space reservation”.

in the few posts and comments i have read of his, GrolaG has come off as quite offensive… not that there is anything wrong with that… that’s just how some people are…

it can be a bit of a challenge to not get wrapped up in all the big words and accusations being thrown around…

isn’t the real question here not GrolaG’s personality or approach to communication…

but whether he actually lost a file… the man could be right…
isn’t that what truly matters here… personally i couldn’t care less about how he phrased it…
i just wonder if he was right or wrong…

So he may have lost a file on a satellite he can’t even sign up for, based on observations of not getting much repair traffic from that satellite?
I must be smoking something… I hope you read the previous messages in this thread…

That’s the thing, he didn’t lose any files. He just infers from the fact that he doesn’t see repair traffic from this satellite that customers must have lost files, and that there can’t be any other reason.

And he shows this attitude in almost all of the posts of his I have read. I really don’t think it’s the language barrier. His English is much better than, for instance, Vadim’s, but they are like night and day in terms of what they bring to the community. Vadim is one of the top contributing members here, even though I sometimes have trouble understanding him :wink:


“that’s just how some people are” is never an excuse for poor behaviour and isn’t acceptable.
However, language barriers can be problematic and some things can be understood the wrong way.

That said, I can only agree to this:


Hey you all!

I’ll try to share some insights into what’s going on and what is not.

First of all, i can definitely say that the thread title is NOT correct. Both europe-north-1 and saltlake are hosted the exact same way. They even match all the config values that our customer-facing satellites are configured with.
It might seem very strange that europe-north-1 does not have any (or at least barely any) repair traffic. But as many previous replies stated, it’s a combination of a couple of factors:

  • The data that currently gets uploaded, and has been uploaded, via europe-north-1 is younger than every file held by the saltlake satellite. This simply means that data on either of the customer-facing satellites or on saltlake has had to sustain network influences much longer than the data on europe-north-1.

  • We have many nodes joining, and some leaving, the network every day. The leaving part has significantly decreased, especially nodes that just leave without a graceful exit. This fact alone has a pretty huge influence on the amount of repair work ahead of a given satellite.

  • The repair threshold has been lifted to 52 pieces, causing much more frequent repair across the board, in order to get better metrics around the velocity and effect of the above-mentioned node churn and the impact of ongoing repair across the network. Let’s not forget that by default all satellites share the nodes at the current stage (besides nodes that partially left some satellites).
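The threshold mechanics in that last bullet can be sketched roughly like this (a minimal illustration with hypothetical piece counts, not Storj’s actual code):

```python
# Minimal sketch of a repair-threshold check (hypothetical, not Storj's code).
# A segment becomes a repair candidate once its healthy piece count drops
# to or below the configured repair threshold.

def needs_repair(healthy_pieces: int, repair_threshold: int = 52) -> bool:
    """Return True when a segment should be queued for repair."""
    return healthy_pieces <= repair_threshold

# With the lifted threshold of 52, a segment with 50 healthy pieces is
# queued, while one with 60 is still considered healthy.
print(needs_repair(50))  # True
print(needs_repair(60))  # False
```

Raising the threshold from 35 to 52 means segments qualify for repair much sooner, which is why repair traffic rises across the board.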

All those points lead to a typical scenario where the repair work tends to build up fairly quickly (as with the current upload speed to a given satellite, many segments reach that state at once) and then settles into the, at this stage, seemingly “normal” repair throughput that e.g. saltlake is handling. We expect to get into that zone on the europe-north-1 satellite soon; according to my rough calculations/estimations, by the end of this month.

Furthermore, many of the tests we run against europe-north-1 and saltlake are mostly there to test the internals of the satellite under different load scenarios. As a side effect we definitely upload huge amounts of data, but as all of it is frequently accessed and paid for, it is a WIN-WIN situation for everyone, right?

TLDR: The data on europe-north-1 is pretty new/fresh compared to the data on the other satellites, and the lower node churn (without graceful exit) in particular helps to extend that timeframe without any repair. In addition, due to our technically unnecessary high repair threshold, the repair load is intentionally higher than planned long term. We do not run any space reservation uploads, nor do we plan to.

PS: As its operator, i can assure you that we have very exciting plans for its future. Sadly that’s all i can share so far. So, stay tuned!


I did (in v2).

Interesting, because the uplink needs only 35 (maybe a bug?)

And why did saltlake start getting repaired from the moment it was launched (without a “lag” window)?

Does this mean that both satellites are used for tests?

(BrightSilence, sorry, I accidentally selected you as the recipient.)

No, you didn’t. You didn’t read what you signed up for. Each contract lasted 90 days in v2.


The repair threshold is indeed configured to be 35 in uploads. We have an override flag in the code for our detection, to repair earlier:
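A hedged sketch of what such an override could look like (the names and logic here are assumptions for illustration, not the actual satellite code):

```python
from typing import Optional

# Hypothetical illustration of an override flag: the repair checker uses
# the override when it is set and higher (i.e. stricter, repairing earlier)
# than the threshold stored with the segment at upload time.

def effective_repair_threshold(segment_threshold: int,
                               override: Optional[int]) -> int:
    """Pick the threshold the repair checker should act on."""
    if override is not None and override > segment_threshold:
        return override
    return segment_threshold

print(effective_repair_threshold(35, 52))    # 52 -> repair kicks in earlier
print(effective_repair_threshold(35, None))  # 35 -> plain uploaded value
```

This reconciles the two numbers in the thread: uploads carry 35, but the satellite-side override of 52 is what actually drives repair.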

As mentioned earlier, saltlake and the other satellites have been repairing much more, and earlier, due to the much higher node churn (nodes that did not do a graceful exit). If the data simply leaves the network, the satellite has to do the repair, whereas with a graceful exit the pieces are moved to other nodes in the network.
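The difference between the two departure modes can be sketched like this (an illustrative model with made-up names, not Storj’s implementation):

```python
# Illustrative sketch: a graceful exit transfers a node's pieces to other
# nodes, so the segment's healthy piece count is unchanged; an abrupt
# departure loses the pieces and moves the segment toward the repair
# threshold.

def healthy_after_departure(healthy: int, pieces_on_node: int,
                            graceful: bool) -> int:
    """Healthy piece count of a segment after one node leaves."""
    if graceful:
        return healthy               # pieces moved, nothing lost
    return healthy - pieces_on_node  # satellite may have to repair

print(healthy_after_departure(60, 1, graceful=True))   # 60
print(healthy_after_departure(60, 1, graceful=False))  # 59
```

So only the abrupt departures consume the buffer between the current piece count and the repair threshold.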

I can confirm that both saltlake and europe-north-1 are (employee-only) test satellites. That said, i am pretty sure that in the case of europe-north-1 this is, as the name obviously suggests, subject to change.

As @donald.m.motsinger already answered by now, V2 had the flaw that it only stored the data for 90 days, after which, per the design, a contract renewal between the bridge and storagenodes was planned. This sadly was never implemented, leading to the deletion of the data from storagenodes after 90 days, while the bridge still believed the nodes held the data (in good hope at least :stuck_out_tongue:).
This is only one of the reasons why V3 exists today and has already surpassed V2 by an order of magnitude.

Let me know, if there is still something unclear, i am happy to help!


So we have 3 (!!!) test satellites!
The question arises: why create a new satellite and turn off the old one? Why not keep using the existing ones? In addition, the percentage of payouts for many nodes approached 100%… But wait a minute, I think that’s profitable.

  1. The satellite stopped being used almost immediately after many nodes began to receive 100%.
  2. The satellite is not used, but it is not turned off either, which means the held amount is not paid out.

By the way, @Odmin ! I remember that you were interested to know the statuses of the satellites. Look at Stephan’s message :slight_smile:

What’s the problem with 3 test satellites?

Mine specifically is not shut down in the sense of “shutdown”; rather, it will be repurposed soon.
In terms of why we have multiple satellites to test with: because there are tests that have to be executed side by side to notice differences at a more granular level and to rule out other environmental issues.
In terms of the held amount, i can assure you that all nodes will get their amount paid when the satellite is fully shut down. That noted, my satellite currently still holds 25TB of “user” data, coming in at ~75TB in the network.

On another note, keeping a satellite running without any load on it will cost more money than it’s worth to withhold the payment for SNOs. Again, these assumptions are not helpful for a proper, fact-based discussion.

Look at Stephan’s message

My name is Stefan. Thanks for keeping an eye on it.


pretty sure that he started out claiming he lost a file… or that’s how i understood it…
i was just pondering if he was going to produce any actual proof of his claim…


i also believe that in that case, after some discussion, we ended up finding out that he had plenty of backups… so even though he lost his files on the v2 network he still had them… lol
this is too much… my anger meter is hitting the red…
and now i feel like a troll for kicking life back into this… tsk tsk

but hey, a great update from stefan, so it’s going to be interesting to see what that secret project is about… i guess this thread was not a total waste then

“One will never reach one’s destination if one stops to throw stones at every dog that barks on the way…” winston churchill :smiley:


Hi @stefanbenten !

It is really nice to see you here! You are a very rare guest on this forum :slight_smile:

Thanks for explaining! I was a little bit confused by the north satellite.

P.S. I don’t know why everyone compares repair traffic across different months; I can share the current status for this month:

Storage in PB*h

As you can see, storage is about the same.

And I can also confirm that the repair traffic is different (north is the lowest).


But what about this?

Sorry, my mistake.


I think that’s based on the local uplink settings, not the satellite settings.

It didn’t. I just looked back at my node, which has been on that satellite since it launched. The first 2 months it had a few hundred KB of repair traffic. Month 3, only a few hundred MB. Starting in month 4 it finally got into the GBs. Sounds pretty damn similar to europe-north-1, which is now in its 4th month and is already showing 1GB of repair.

Edit: Should have scrolled down more, some of this was already answered.


Shutdown from its current task :slight_smile:
I cannot share more details yet, but as it is probably well established by now, it would be a shame to bury it.

Hey @Odmin!

Comparing the same calendar months is not a good idea. You might want to compare the months since each satellite went “live”. For example:

Satellite A joins 05/2000.
Satellite B joins 10/2000.

If you upload data steadily from 05/2000 and keep the same pace going after 10/2000, then Satellite A will hit its first noticeable repair at, let’s say, 02/2001 (~9 months of existence). At the same time, it’s unlikely that Satellite B will have the same amount, as the data on it is only a few months old. Based on our current observations, and assuming the same network behavior in terms of node churn and growth, Satellite B should not start significant repair before 07/2001.
I hope this makes it a little bit clearer.
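The age argument above can be checked with a few lines of arithmetic (the dates and the repair-onset point are taken from the illustrative example, not real network data):

```python
from datetime import date

def months_between(a: date, b: date) -> int:
    """Whole calendar months from a to b."""
    return (b.year - a.year) * 12 + (b.month - a.month)

sat_a_launch = date(2000, 5, 1)   # Satellite A joins 05/2000
sat_b_launch = date(2000, 10, 1)  # Satellite B joins 10/2000
checkpoint = date(2001, 2, 1)     # Satellite A's first noticeable repair

# At the same calendar date, the two satellites hold data of very
# different age, which is why comparing the same month is misleading.
print(months_between(sat_a_launch, checkpoint))  # 9
print(months_between(sat_b_launch, checkpoint))  # 4
```

At the checkpoint, Satellite A's oldest data has been exposed to node churn for more than twice as long as Satellite B's.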


Thanks a lot! It’s absolutely clear now :+1:
Your explanations are always clear and I like them.
I can say it again, it’s really nice to see you here! :slight_smile:


The main problem is the “game” with the “held amount rotation”. However, I was even more surprised to hear the news about your satellite.

In any case, it will be extremely interesting to see how the satellite will work. Yesterday I had about 30-35% transfer errors on graceful exit. The nodes are either unavailable or too slow. And there are very few of them.

UPD. Can you give the number of active nodes, to the nearest hundred?

I can encourage you to run a full storj-sim setup yourself, or to operate an enclosed environment yourself to test such things. You will notice pretty quickly that many things are coupled together very tightly.

In terms of your describing it as a “game”, i can see why you feel like this, but at the same time i can assure you that we do not want to “play”, but rather to go at a quicker pace. If it were more comfortable for you, we could shut down either of the test satellites and go at like 33% development speed. Just seeing it from the negative side the entire day will not help any of us. The flip side of your “game” is that if you are vetted on all satellites, and they keep the same load up, you will earn $$$. :slight_smile:

In terms of network node count i can share without a doubt that we surpassed 42. :smiley:
Nerd jokes aside, we have ~8.1k nodes that reported/offered a disk_free space value to my satellite in the last hour.