Big deletions happening?

I see one node which lost more than 100 GB between yesterday and today by deletions.
Anybody else see the same?


Hello! Yes, I have seen deletes on my nodes starting 07.07, but since yesterday they have slowed down/stopped.


It was more the other way around. There was a problem with the US satellite. We triggered a restore from trash and paused GC for a few days to make sure GC doesn't increase our problem. In the end we were able to fix the root cause and re-enabled GC. Restoring files from trash and pausing GC only means the deletes get postponed, and that is what you are currently seeing on your nodes.
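To illustrate the point about postponement, here is a toy model (not actual Storj code, and the class/field names are invented for this sketch): while GC is paused, pieces flagged as garbage simply accumulate, and re-enabling GC releases them all to trash in one pass, which is why nodes see one large deletion instead of many small ones.

```python
# Toy model: why pausing garbage collection (GC) postpones deletions
# rather than avoiding them. All names here are hypothetical.

class Node:
    def __init__(self):
        self.pieces = set()   # pieces currently stored on the node
        self.trash = set()    # pieces GC has moved to trash
        self.pending = set()  # pieces flagged as garbage, awaiting GC
        self.gc_paused = False

    def mark_garbage(self, piece):
        # The satellite indicates this piece is no longer referenced.
        self.pending.add(piece)

    def run_gc(self):
        if self.gc_paused:
            return 0  # nothing moves while GC is paused
        moved = self.pending & self.pieces
        self.pieces -= moved
        self.trash |= moved   # GC moves garbage to trash, not straight to deletion
        self.pending -= moved
        return len(moved)

node = Node()
node.pieces = {f"piece-{i}" for i in range(1000)}
node.gc_paused = True

# While GC is paused, garbage quietly accumulates...
for i in range(300):
    node.mark_garbage(f"piece-{i}")
assert node.run_gc() == 0  # ...but nothing is trashed yet

# Re-enabling GC releases all postponed deletions at once.
node.gc_paused = False
print(node.run_gc())  # all 300 flagged pieces move to trash in one pass
```

The burst of 300 trashed pieces in the final step mirrors the 100 GB+ drops node operators reported after GC was turned back on.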


And that's why we never delete anything from our nodes, not even trash. Good thing that system is in place 😅

Thanks for the update!


I'm curious about the potential fragility of the Storj network. There's just a small handful of satellites; wouldn't they be a weak link in the scheme of this whole thing? Are there plans to increase the number of satellites or make them more distributed?

Each satellite is already more distributed than you might think. A DB node or an API pod can crash without the customer noticing it.


I have a graceful exit (GE) on europe-north-1.tardigrade which seems to have stalled out completely for over 14 days. Could this be related to this issue on the network?

Could be. Could you please file a support ticket?

I assume you mean using this link?

Or should I just make a new topic?

That is the correct link.


Does it repeat?
One of my nodes has 1 TB of trash 🙂 Another has 300-700 GB of trash.
Anybody else see the same?

Yes, a big deletion, but it's normal, I suppose. I have seen this kind of fluctuation in the past.

I agree. But I just want to know it's not my hardware's fault and that everyone really sees this. I felt a little panic after seeing 1 TB of trash 🙂

In another thread someone mentioned that the STORJ team enabled garbage collection ~2 days ago. It was turned off for the last few weeks, so it is probably cleaning up all nodes right now.


If it were your hardware, corrupted pieces would be unlikely to be moved to the trash unless they were deleted.
And @striker43 is right - it's likely the garbage collector. This also means that your nodes perhaps missed some deletion messages, so they could have had downtime in the past.


Looking at my resource graphs, I think it started on 22nd Nov. around 6 pm CET.

Thanks for the explanation :slight_smile:

See this post: