Test Data on the Storj Network and Surge Payouts for Storage Node Operators

I see. So audit data is no “special” kind of data from the satellites; it is based on real upload data. That makes it even sadder if real customer data cannot be placed on a node because it is blocked with test data.

Yes. First of all, the network exists for customer data, not for holding test data. And if data gets deleted from your node, or you get disqualified on one satellite, it is much better to be in a position to receive new traffic from other satellites immediately than to have to wait up to 30 days to get vetted on them.

Expansion does not really make sense when satellite data cannot be restricted: with the current rate of approx. 200 GB of test data flowing in per day, it would require at least 6 TB to cover 30 days, and there is no guarantee that you will have been vetted on the other satellites by then.
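A quick back-of-the-envelope check of the figure above, using this thread's own numbers (observed ingress and vetting window, not official figures):

```python
# Rough estimate: at ~200 GB of test data per day, how much space does a
# node need to stay unfilled for the worst-case 30-day vetting window?
ingress_per_day_gb = 200          # observed test-data ingress (this thread's estimate)
vetting_window_days = 30          # worst-case vetting duration

needed_tb = ingress_per_day_gb * vetting_window_days / 1000
print(f"Space needed to absorb {vetting_window_days} days of ingress: {needed_tb} TB")
# → 6.0 TB, matching the "at least 6 TB" figure above
```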

It would be much better if there was a way to limit the space for test data or to assign space to satellites.

We can complain about this all we want, the question is, how can that problem be solved?
Do you see a way around it?
If so, please let us know.
I currently can’t think of a way for a node to get data for audits while it is full.

However, it has been said that test data will also receive deletes and uploads, so theoretically there should be data from the new satellite too, although the chances might be significantly lower.

Yes, as long as your node has at least one piece from a satellite, it will be vetted eventually.

I like seeing my node full and even more like seeing a lot of egress. If Storj wants to waste their own money by paying me for storing test data, so be it. It’s much better than an empty node.
Why would Storj do that? Well, one reason would be to guarantee that there is enough space available (for example, if a new client wants to store 1 PB of data, Storj can just delete 1 PB of test data).

This is actually interesting - assuming there will be non-Tardigrade satellites in the future I may want to distribute the space.

My node exists to hold the customer data. That customer is whoever runs the satellite. Whether my customer stores his own data on my node or resells the space to someone else does not really matter to me as long as I get paid.

But it’s awesome: judging from the Storj posts, the test data will continue to come in for some time. This is great. I already picked which new drives I will get to expand my node once the space runs out.


Not from a technical perspective. It would require some kind of limit on how much a satellite can upload, to keep vetting going for the other satellites. For the specific test satellite, it might be helpful if the SNO could limit the amount of space it may occupy. So, let’s say, if I have a 2 TB node, I could make 1.5 TB available also for test data but keep the rest exclusively for other satellites.
A temporary upload block that the SNO can lift once audits from the other satellites flow in could also help. The test satellite is a bit special in that it pushes so much data onto a node. I am currently getting 200 GB per day, and it is simple math to tell when the node will be full. Just full of test data.

Is it possible to estimate how much data must be present on a node for it to be vetted? Would a single bit of data from a satellite theoretically be sufficient to get vetted?


Yes, if your node has a single piece from the satellite, it will get vetted eventually, though it may take a longer time.

But if this is the case, wouldn’t a solution be easily possible? The node knows which satellites exist, from which satellites data is stored, and how much space is left. So when a node fills up, for the last remaining 5 GB or so, it could report “space full” to all satellites it already has data from and “space available” only to those satellites that are missing or from which it holds very little data. Maybe the SNO could even set the threshold (vetting space).

With this, a SNO could make sure that his node keeps receiving data, but also that it receives data from all satellites, so it keeps getting vetted on all of them.

And even if a new satellite pops up, the node would recognize that data is missing for this satellite. As soon as space becomes available, e.g. if a customer deletes a few GBs, the node would reserve the vetting space for the new satellite and thus make sure that it has a chance to get vetted on the new satellite even when it is full.
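The reporting rule proposed above could be sketched roughly like this. All names and thresholds here are hypothetical for illustration; this is not the real storagenode API:

```python
# Sketch: when free space drops below a SNO-configurable "vetting space"
# threshold, report "full" to satellites the node already holds data
# from, but keep accepting uploads from satellites it has little or no
# data from (including brand-new ones).
VETTING_SPACE_GB = 5      # hypothetical SNO-configurable reserve
MIN_DATA_GB = 0.1         # below this, a satellite still needs vetting data

def advertise_space(free_gb, stored_gb_by_satellite, known_satellites):
    """Return {satellite_id: advertised free space in GB}."""
    report = {}
    for sat in known_satellites:
        stored = stored_gb_by_satellite.get(sat, 0)
        if free_gb > VETTING_SPACE_GB:
            report[sat] = free_gb    # plenty of room: accept from everyone
        elif stored < MIN_DATA_GB:
            report[sat] = free_gb    # reserve the last GBs for unvetted satellites
        else:
            report[sat] = 0          # "full" to satellites we already hold data from
    return report

# A nearly full 2 TB node with 3 GB left still advertises space
# to a new satellite it holds no data from:
print(advertise_space(3, {"us1": 500, "eu1": 200}, ["us1", "eu1", "new-sat"]))
# → {'us1': 0, 'eu1': 0, 'new-sat': 3}
```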

How does this sound?


This looks interesting to me - reserving some space for new satellites.

Reserving space does not seem like a good idea to me. If you have been running a node for 9+ months, you keep 75% of the revenue; if a new satellite comes along, you will only keep 25% of the revenue in the first months. Assuming you have a full or nearly full node, what is the incentive to get vetted on the new one when your currently stored data is worth more on the older satellite?

See payout structure at https://storj.io/blog/2019/01/sharing-storage-space-for-fun-and-profit
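The schedule behind those numbers, paraphrased from the linked blog post (check the post itself for the current terms), could be sketched as:

```python
# Held-back schedule: the satellite withholds part of a node's earnings
# during its first months on that satellite, so revenue from a brand-new
# satellite is worth less to the operator at first.
def kept_share(node_age_months):
    """Fraction of earnings the operator keeps, by node age on a satellite."""
    if node_age_months <= 3:
        return 0.25
    if node_age_months <= 6:
        return 0.50
    if node_age_months <= 9:
        return 0.75
    return 1.00

print(kept_share(1))   # brand-new satellite: keep 25%
print(kept_share(9))   # 9 months in: keep 75%
```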


Thinking about it:

  1. If the SNO could set the threshold, then it is totally up to him. If he sets it to 0, no space will be reserved.
  2. Space will only be reserved if the node does not already hold data from all satellites. If enough data is present, there is no need to reserve space.
  3. The reserved space might be only a small share. If you run a 2 TB node and 5 GB is reserved, I don’t see how it would hurt your earnings very much.

It is a fallback strategy. Vetting is expensive: it takes up to 30 days, during which you receive only 5% of the potential traffic from a satellite. As the SNO has no control over the data, he can never be sure that the data he has today will still be there tomorrow. If a customer decides to delete data, it can be gone at any time. If you get disqualified on a satellite, it is gone. In that case you will be happy if you can replace the lost data as quickly as possible with data from other satellites. The same applies when you expand your node. And you are in the best position for that if you have already been vetted.
So basically the incentive is to be ready to receive data at 100% rate from any satellite at all times.
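To quantify the cost of vetting mentioned above, using this thread's figures (5% traffic share while unvetted, 200 GB/day full-rate ingress):

```python
# How much ingress does an unvetted node miss out on over a 30-day
# vetting period, compared to a vetted node? All rates are this
# thread's estimates, not official numbers.
vetted_ingress_gb_day = 200   # full-rate ingress cited earlier in the thread
vetting_share = 0.05          # unvetted nodes reportedly get ~5% of traffic
vetting_days = 30

unvetted_total = vetted_ingress_gb_day * vetting_share * vetting_days
vetted_total = vetted_ingress_gb_day * vetting_days
print(f"Ingress while vetting: {unvetted_total} GB vs {vetted_total} GB if vetted")
# → 300.0 GB vs 6000 GB
```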

One thing I’m not sure about and might help the situation is …

How much of the “test” data is there for node incentive only? And how quickly can it be removed from the network if required?

So when a node is getting full, say 85-95%, it sends an update to the testing satellites, which then start to remove some of the “incentive data”, freeing up room for customer data?
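The trigger described here could look something like the following. The names and watermarks are hypothetical; no such API exists today:

```python
# Sketch: when used space crosses a high-water mark, ask the test
# satellites to delete incentive data until usage falls back below a
# low-water mark.
HIGH_WATER = 0.95   # hypothetical: request cleanup above 95% usage
LOW_WATER = 0.85    # hypothetical: free space back down to 85% usage

def should_request_cleanup(used_bytes, capacity_bytes):
    """True once usage crosses the high-water mark."""
    return used_bytes / capacity_bytes >= HIGH_WATER

def cleanup_target_bytes(used_bytes, capacity_bytes):
    """How much incentive data to ask the test satellite to delete."""
    target_used = LOW_WATER * capacity_bytes
    return max(0, used_bytes - target_used)

# A 1000 GB node with 950 GB used would ask for 100 GB to be freed:
print(should_request_cleanup(950, 1000))    # → True
print(cleanup_target_bytes(950, 1000))      # → 100.0
```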


Looks like the stefan-benten satellite has started deleting data.


Mine too…

How low will the baseline go?

I’m hoping it’s low enough to shuffle my drive space… been wanting to add some new services.

That satellite has about 2TB on my node IIRC, so, not a lot.

That statement comes from their testing pattern and is not relevant for production; it can be less than 1% in production, as that will entirely depend on their clients’ use cases.

In my case it was not even 10%, it was 5%. So my statement (made over a month ago) was valid: 95% of my storage is almost free to the network.

For now, the best way to run decentralized cloud storage would be on centralized cloud storage: AWS or a VPS.

In what way is it better to pay more for something like that?

I’m not paying more, and I can’t even write that comment as I need to write 20 chars.

I’m sorry… what…?


Anyone else notice that test data is no longer coming in and now deletes are? Man, I want more of that test data to come in.


Yes. I see deletes from all satellites but mostly the testing one.

Saltlake started deleting a lot of files.