Another data wipe

Yep, I know this question has come up a few times before.

Is there going to be another data wipe for the test data?
When Storj launches v3, will the test data be wiped clean?

And yes, I'm asking because I only have 60 GB left on my HDD 🙂, and it seems like the test data should be wiped at launch.

As previously mentioned, there won't be any more data wipes, since real customer data is now on V3. In my opinion, if the devs want, they can send a delete instruction to remove all test data. This was previously done when some nodes were full.

3 Likes

I remember that. Freed up some space 👍.
I know that we now have costumer data, but we still have test data on the node. And I guess a lot of it.

Thanks for the answer. We'll probably see what happens to the test data.

Didn't know Storj had gotten a costume designer as a customer :business_suit_levitating::wink:

Hahahahahaha :smile::joy: I'm just that funny.

Downside of using the phone.

1 Like

Yay, ongoing data deletion. (No idea if that's the right way to spell it) :grinning:

I noticed something odd and have filed a ticket for it. Try searching your log for "delete failed". If you don't find it, all is good; otherwise we have a problem.
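
For a Docker node, a quick check could look something like this (assuming the container is named storagenode, as in the default setup):

docker logs storagenode 2>&1 | grep -c "delete failed"

A count of 0 means no failed deletes were logged.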

I'll check for that. Thanks @nerdatwork
Could it be for a file you never received?

No, for my node the satellites are deleting the same file twice. The second deletion results in a failure, which lowers my audit score :frowning:

That sounds strange. I did a quick check and I didn't have any failed deletes.

[Screenshot: 3 more files being deleted twice]

I have 7 more such files. With more deletions, I suspect this number is only going to increase.

1 Like

I have reported this problem to the dev team. Hope they will rectify this ASAP.

1 Like

I got this:

2019-09-21T14:56:05.821Z        INFO    piecestore      deleted {"Piece ID": "TABP7OL3HFF7SVLF2A7GVHSVKTUHLLMH32RPGTDF7LGWGBIJXTDQ"}
2019-09-22T00:03:01.315Z        INFO    piecestore      download started        {"Piece ID": "TABP7OL3HFF7SVLF2A7GVHSVKTUHLLMH32RPGTDF7LGWGBIJXTDQ", "SatelliteID": "118UWpMCHzs6CvSgWd9BfFVjw5K9pZbJjkfZJexMtSkmKxvvAW", "Action": "GET_AUDIT"}
2019-09-22T00:03:01.316Z        INFO    piecestore      download failed {"Piece ID": "TABP7OL3HFF7SVLF2A7GVHSVKTUHLLMH32RPGTDF7LGWGBIJXTDQ", "SatelliteID": "118UWpMCHzs6CvSgWd9BfFVjw5K9pZbJjkfZJexMtSkmKxvvAW", "Action": "GET_AUDIT", "error": "rpc error: code = NotFound desc = file does not exist"}

The satellite tried to audit a piece that was deleted.

I also have a lot of multiple deletions of the same piece, like @nerdatwork

2019-09-21T06:41:42.629Z        INFO    piecestore      deleted {"Piece ID": "HCT2R6K7YZE64CXJ57XSB3K636RAG2RWABMKEKTUWD3W2FCYZEDQ"}
2019-09-21T07:35:01.624Z        ERROR   piecestore      delete failed   {"Piece ID": "HCT2R6K7YZE64CXJ57XSB3K636RAG2RWABMKEKTUWD3W2FCYZEDQ", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storage/filestore.(*Store).Stat:80\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).Delete:170\n\tstorj.io/storj/storagenode/pieces.(*Store).Delete:257\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Delete:136\n\tstorj.io/storj/pkg/pb._Piecestore_Delete_Handler.func1:1134\n\tstorj.io/storj/pkg/server.(*Server).logOnErrorUnaryInterceptor:38\n\tstorj.io/storj/pkg/pb._Piecestore_Delete_Handler:1136\n\tgoogle.golang.org/grpc.(*Server).processUnaryRPC:940\n\tgoogle.golang.org/grpc.(*Server).handleStream:1174\n\tgoogle.golang.org/grpc.(*Server).serveStreams.func1.1:696"}
2019-09-21T07:54:52.267Z        ERROR   piecestore      delete failed   {"Piece ID": "HCT2R6K7YZE64CXJ57XSB3K636RAG2RWABMKEKTUWD3W2FCYZEDQ", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storage/filestore.(*Store).Stat:80\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).Delete:170\n\tstorj.io/storj/storagenode/pieces.(*Store).Delete:257\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Delete:136\n\tstorj.io/storj/pkg/pb._Piecestore_Delete_Handler.func1:1134\n\tstorj.io/storj/pkg/server.(*Server).logOnErrorUnaryInterceptor:38\n\tstorj.io/storj/pkg/pb._Piecestore_Delete_Handler:1136\n\tgoogle.golang.org/grpc.(*Server).processUnaryRPC:940\n\tgoogle.golang.org/grpc.(*Server).handleStream:1174\n\tgoogle.golang.org/grpc.(*Server).serveStreams.func1.1:696"}
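
To see the full history of a single piece, you can grep the log for its ID (again assuming a Docker node named storagenode), e.g.:

docker logs storagenode 2>&1 | grep "HCT2R6K7YZE64CXJ57XSB3K636RAG2RWABMKEKTUWD3W2FCYZEDQ"

That shows the successful delete followed by the two failed ones for this piece.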
1 Like

Also reported. Thanks for the logs.

A failed delete won't affect a node's reputation, so no need to worry about that. Devs are looking into @Pentium100's claim that the satellite attempts to audit a piece that is already deleted.

1 Like

@Pentium100 How long before the audit was the piece deleted? We cannot avoid race conditions where the satellite selects a segment to audit and it is deleted at the same time. If the audit fails for this reason, the satellite detects it and won't penalize the nodes.

@Dylan …

It was 9 hours after the deletion.
And it did impact my score.

Just after the update, the audit stats were like this:

{
  "totalCount": 49040,
  "successCount": 49002,
  "alpha": 19.99999999999995,
  "beta": 4.4e-323,
  "score": 1
}

And now they are like this:

{
  "totalCount": 50614,
  "successCount": 50575,
  "alpha": 19.559873331348204,
  "beta": 0.44012666865176525,
  "score": 0.9779936665674117
}

The difference between totalCount and successCount went up by 1, and the only audit failure in the log is the one for the deleted piece. There are no other audit failures, not even recoverable ones (context canceled).
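
For what it's worth, these numbers are consistent with the score being alpha / (alpha + beta) (my assumption from the stats above): 19.5599 / (19.5599 + 0.4401) ≈ 0.978, so that single failed audit would account for the whole drop from 1.0 to 0.978.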

Thank you for discovering the error :slight_smile:

Confirmed.

On my node:

docker logs storagenode 2>&1 | grep "delete failed" -c
104

docker logs storagenode 2>&1 | grep "file does not exist" | grep "GET_AUDIT" -c
2

And some of the "deleted" pieces are audited unsuccessfully :frowning:

1 Like