Storage Nodes Full of Test Data

Some of my small nodes are full of test data, and therefore the egress/ingress traffic is around 0.
No egress traffic when a node is full does not make sense to me, but that is ok; testing rules are like that.

Since I don’t want my nodes to be full of test data when production is launched, I would love to know when all this test data (only the test data) will be deleted, to free up some room for real data.

We have 2 issues here.

  1. Zombie segments that are difficult to clean up: Design Draft: Zombie Segments Cleaner
  2. Garbage collection is still disabled. Hmm, I can’t find a design document for that. It looks like we forgot to post it. I will try to find it. :slight_smile:

We are getting closer to a solution, but I can’t give you any ETA.


Thanks for your answer. Knowing that there are plans to release all these data files is OK with me.
I will analyze the behaviour of my full nodes in the meantime. I do not expect much egress, since in “test mode” it seems to be a function of the available disk space, but I need to confirm it (soon, I hope).


Since my node’s disk is full (the defined maximum), the egress traffic is around 0 too. Does this happen to all 2000 nodes during the test phase until there is a solution (in weeks or months)? I can’t imagine that. Maybe I misunderstood. I know it’s not mining, but 0 traffic feels wrong.


Please note that we already answered your question in the support ticket you filed. Could you please help us by not double posting the same question here on the forum while also opening a support ticket? We understand that you are eager to get an answer asap, but this creates additional work for us, as we have to double-check here and in the support tickets whether it is the same issue. Thank you for your understanding that answers on the forum may not arrive immediately at all times, especially on a weekend. So unless you have a very urgent issue that keeps your node from running, please try to resolve questions here first.

If that was addressed to me:

I have not filed a support ticket and will wait for an answer here. No stress from me. Enjoy your weekend.

The wording in the ticket was exactly the same as your last post, so I am sorry if someone else copied and pasted your comments into a support ticket; in that case, my comment is addressed to them.

The zero egress is related to only one test case (see the sketch below):

  1. Try to upload some test data to the node
  2. If the upload is successful, try to download some of that test data back

It is not related to other test cases or to customers’ data/tests. However, this is the case when your node contains only test data and no customers’ data.
I do not think it would affect all 2000 nodes under such constraints.
But we are trying to address those cases too. Thank you for your participation and patience!
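
To illustrate the logic, here is a simplified sketch; it is not our actual test code, and the TestClient class and its methods are hypothetical, but the control flow is the point: the download in step 2 only happens after a successful upload in step 1, so a full node that rejects the upload never produces egress from this test.

```python
# Simplified sketch only: TestClient and its methods are hypothetical stand-ins,
# not the real uplink/storagenode API. The point is the control flow of the test.
import os
import secrets


class TestClient:
    """Hypothetical view of a single storage node as seen by the test."""

    def __init__(self, capacity_bytes: int, used_bytes: int):
        self.capacity_bytes = capacity_bytes
        self.used_bytes = used_bytes
        self.pieces = {}

    def upload(self, piece_id: str, data: bytes) -> bool:
        # A full node rejects the upload ("out of space").
        if self.used_bytes + len(data) > self.capacity_bytes:
            return False
        self.pieces[piece_id] = data
        self.used_bytes += len(data)
        return True

    def download(self, piece_id: str) -> bytes:
        # The only egress this particular test generates.
        return self.pieces[piece_id]


def run_test_case(node: TestClient) -> int:
    """Return the number of egress bytes produced by one test iteration."""
    piece_id = secrets.token_hex(16)
    data = os.urandom(2 * 1024 * 1024)  # 2 MiB of random test data

    if not node.upload(piece_id, data):  # step 1: upload
        return 0                         # full node: nothing uploaded, nothing to download
    return len(node.download(piece_id))  # step 2: download it back


full_node = TestClient(capacity_bytes=100, used_bytes=100)
print(run_test_case(full_node))  # -> 0: a full node sees no egress from this test
```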

By the way, if your node is affected, you can run a second one with a new identity if you have another HDD. We simplified the authorization token acquisition process recently.


I can also provide details on a similar scenario.
My node is full and has been receiving little traffic since beta 2.

I masked out my node ID, but have no problem disclosing it if need be.
Also, no errors according to the logs:

2019-11-24T07:27:14.426Z        INFO    piecestore      downloaded      {"Piece ID": "6RIJLZBGPLXBWUV2M374OLF7TAPJUGQONA75SO2ULRRU34WZZAFQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET"}
2019-11-24T07:27:24.373Z        INFO    version running on version v0.26.2
2019-11-24T07:28:02.813Z        INFO    piecestore      download started        {"Piece ID": "HFIKCXZYZAJM3YAYC3YFOA6F5M5J42O2ANNA5I73MZEL6NOCV2SA", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Action": "GET"}
2019-11-24T07:28:05.584Z        INFO    piecestore      downloaded      {"Piece ID": "HFIKCXZYZAJM3YAYC3YFOA6F5M5J42O2ANNA5I73MZEL6NOCV2SA", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Action": "GET"}
2019-11-24T07:28:06.171Z        INFO    piecestore      download started        {"Piece ID": "H2BOQB5ADCO7IT66KTL2PFXC2RGH2DPJP2UL75UOQBLBOK2B6Z4Q", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Action": "GET"}
2019-11-24T07:28:09.281Z        INFO    piecestore      downloaded      {"Piece ID": "H2BOQB5ADCO7IT66KTL2PFXC2RGH2DPJP2UL75UOQBLBOK2B6Z4Q", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Action": "GET"}
2019-11-24T07:28:13.868Z        INFO    piecestore      download started        {"Piece ID": "6RIJLZBGPLXBWUV2M374OLF7TAPJUGQONA75SO2ULRRU34WZZAFQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET"}
2019-11-24T07:28:14.479Z        INFO    piecestore      downloaded      {"Piece ID": "6RIJLZBGPLXBWUV2M374OLF7TAPJUGQONA75SO2ULRRU34WZZAFQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET"}
2019-11-24T07:28:39.069Z        INFO    piecestore      download started        {"Piece ID": "KLMIOYPHNPVMVIKW6ITLXTFZEIJRDDDO5KTMTVMRZIBKAOYY5FVA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_AUDIT"}
2019-11-24T07:28:39.213Z        INFO    piecestore      downloaded      {"Piece ID": "KLMIOYPHNPVMVIKW6ITLXTFZEIJRDDDO5KTMTVMRZIBKAOYY5FVA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_AUDIT"}
2019-11-24T07:29:13.731Z        INFO    piecestore      download started        {"Piece ID": "6RIJLZBGPLXBWUV2M374OLF7TAPJUGQONA75SO2ULRRU34WZZAFQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET"}
2019-11-24T07:29:14.411Z        INFO    piecestore      downloaded      {"Piece ID": "6RIJLZBGPLXBWUV2M374OLF7TAPJUGQONA75SO2ULRRU34WZZAFQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET"}
2019-11-24T07:29:26.150Z        INFO    piecestore      download started        {"Piece ID": "KQYJEKWZBTHN2BA5EALADVDCEZSWXHAIK6N4Q3LMUT2OGOU2BPEQ", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "GET_AUDIT"}
2019-11-24T07:29:26.197Z        INFO    piecestore      downloaded      {"Piece ID": "KQYJEKWZBTHN2BA5EALADVDCEZSWXHAIK6N4Q3LMUT2OGOU2BPEQ", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "GET_AUDIT"}
2019-11-24T07:30:13.765Z        INFO    piecestore      download started        {"Piece ID": "6RIJLZBGPLXBWUV2M374OLF7TAPJUGQONA75SO2ULRRU34WZZAFQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET"}
2019-11-24T07:30:14.372Z        INFO    piecestore      downloaded      {"Piece ID": "6RIJLZBGPLXBWUV2M374OLF7TAPJUGQONA75SO2ULRRU34WZZAFQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET"}
2019-11-24T07:30:34.951Z        INFO    piecestore      download started        {"Piece ID": "APGK76ATDCIJ4FUQCK4HMSMSPKBNA6MU744EPBTZAW5HVI663HPQ", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Action": "GET"}
2019-11-24T07:30:37.280Z        INFO    piecestore      downloaded      {"Piece ID": "APGK76ATDCIJ4FUQCK4HMSMSPKBNA6MU744EPBTZAW5HVI663HPQ", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Action": "GET"}
2019-11-24T07:32:08.237Z        INFO    piecestore      download started        {"Piece ID": "QAJMI22C6RUEDRIIGTQ7EFWHKUHVTHXILWRUXWWXPQKYFQXQNHEA", "Satellite ID": "118UWpMCHzs6CvSgWd9BfFVjw5K9pZbJjkfZJexMtSkmKxvvAW", "Action": "GET"}
2019-11-24T07:32:12.090Z        INFO    piecestore      downloaded      {"Piece ID": "QAJMI22C6RUEDRIIGTQ7EFWHKUHVTHXILWRUXWWXPQKYFQXQNHEA", "Satellite ID": "118UWpMCHzs6CvSgWd9BfFVjw5K9pZbJjkfZJexMtSkmKxvvAW", "Action": "GET"}
2019-11-24T07:32:13.788Z        INFO    piecestore      download started        {"Piece ID": "6RIJLZBGPLXBWUV2M374OLF7TAPJUGQONA75SO2ULRRU34WZZAFQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET"}
2019-11-24T07:32:14.410Z        INFO    piecestore      downloaded      {"Piece ID": "6RIJLZBGPLXBWUV2M374OLF7TAPJUGQONA75SO2ULRRU34WZZAFQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET"}
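
If it helps, here is a quick sketch (not an official tool) for tallying these lines per satellite and action type (GET vs GET_AUDIT), assuming the same log format as above; `node.log` is just a placeholder path:

```python
# Tally successful downloads per (satellite, action) from a storagenode log.
# Assumes lines like the INFO "downloaded" entries above, ending in a JSON
# object with "Satellite ID" and "Action" fields.
import json
from collections import Counter


def tally_downloads(path: str) -> Counter:
    counts = Counter()
    with open(path, encoding="utf-8") as log:
        for line in log:
            if "piecestore" not in line or "downloaded" not in line:
                continue
            payload = json.loads(line[line.index("{"):])
            counts[(payload["Satellite ID"], payload["Action"])] += 1
    return counts


for (satellite, action), n in tally_downloads("node.log").most_common():
    print(f"{satellite}  {action}: {n}")
```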

@Alexey

Thank you for the answer, but now I am more confused. Sorry about that.

It looks like the same problem as here (though my log is without the space error): "piecestore protocol: out of space(...)" - action required?

The solution there is to wait.

Is this the solution for me/us too?

Is there a chance of seeing “normal” traffic in the next few weeks, or does the “only test data” problem need more time?

It’s not a problem to get little egress for a while (currently 1% of normal egress), but having a “dead” node without a solution makes no sense.


I don’t know how to predict human behavior, so I can’t give any insight, sorry.

You still get paid for usage. I do not see any problem here. One of my nodes does not have any traffic from customers of one of the satellites. This is normal. I wish I had more traffic, but we depend heavily on customers. Testers are customers too.
We always ask you to use only what you have now, something that would be online anyway. In that case it costs you nothing.
You can’t build your whole strategy on testers performing some tests. They could just stop, do other tests, or do nothing.
There is no predictable egress, ingress, or space usage. It is nearly impossible to make assumptions based on previous behavior. What we (SNOs) can (and should) do is keep our nodes online and not lose the data.


In October there were a lot of “delete” messages in the log and many GBs were removed, so I think it is possible to delete test data somehow. What is the issue now?

Just want to highlight that smaller nodes get filled with test data during the 1-3 month period where 75% of storage node revenue is withheld.
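
To put a rough number on that (the $10 is just an example figure; only the 75% rate is from the payout terms):

```python
# Illustration only: how the 75% held-back rate affects early payouts.
monthly_earnings = 10.00  # hypothetical gross earnings in month 2 (USD)
held_rate = 0.75          # withheld during months 1-3

print(f"paid now: ${monthly_earnings * (1 - held_rate):.2f}, "
      f"held back: ${monthly_earnings * held_rate:.2f}")
# -> paid now: $2.50, held back: $7.50
```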

Perhaps nobody deletes it?

Deleting test data is on Storj’s side, I guess. Maybe it is waiting for the garbage collector.
I had to remove unused data.

Please do not remove data from your node yourself; it will be disqualified for that.


This does not apply to all smaller nodes. I myself have a small node running for much more than 3 months now that is not even half full yet. Not everyone running nodes has great bandwidth and RAM.

Sure, the node will be disqualified, but when I get a few MB/day instead of tens of GB/day of egress traffic while I still have space available, the math is quickly done: shut down this unprofitable node and apply for a new one.
The problem is the business model used by Storj: they built a completely new technology and want to align their prices with AWS. They should rather base the pricing on the cost of the service. Bandwidth cost for the nodes is the same whether they check their mail once a day or upload 130 TB/month (400 Mbps for one month). But storage with 97% availability has a cost: electricity, HDD, computer. The egress price should be drastically reduced ($1/TB would be sufficient) and the storage price greatly increased (say, $15/TB/month); this would make a service that uses the unused bandwidth, rather than a system specialized in cold storage. And with a serious problem in garbage collection on top.
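
For reference, the 130 TB/month figure is roughly what a saturated 400 Mbit/s link moves in a 30-day month:

```python
# Sanity check of the 130 TB/month figure (decimal units throughout).
link_mbps = 400                     # megabits per second
seconds_per_month = 30 * 24 * 3600  # 2,592,000 s

terabytes = link_mbps * 1e6 / 8 * seconds_per_month / 1e12
print(f"{terabytes:.1f} TB/month")  # -> 129.6 TB/month
```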


This is a real issue for me. Since my 3TB node filled up, I have been getting no egress, so there is no incentive to run the node anymore. The amount received for the storage space alone is too little.

The escrow is only going to keep SNOs interested for so long; eventually they are going to hard exit.

The combination of escrow and lack of graceful exit is creating a business model that is not going to be relevant for long.


Yes, absolutely. Currently the most profitable option is to close your node when it’s full and open a new one.
