Feedback on using Tardigrade as a normal user

Dear Storj Team,

I tested Tardigrade from the CLI and I think it is awesome.

However, as a normal user I have no real-world use case for it yet. That’s bad.

  1. I tried to set up Duplicati, because I use Duplicati on my QNAP for backups. It gets a connection,
    but it does not work reliably.

  2. I tried the native S3 backup function on the QNAP. That does not work either.

It would be nice if Tardigrade could be used with Duplicati and QNAP for backups.

Hi @jensamberg,
glad you are enjoying the Tardigrade CLI experience. I’m sorry to hear that you haven’t yet found use cases that work for your needs. I’m curious: would you mind sharing what kinds of projects you usually work on, and what you’re looking for?

@jocelyn I think the first goal should be 100 % AWS S3 compatibility, so that Tardigrade can be used on NAS drives and with open-source projects like Duplicati as backup space.

I would like to add that I was also planning on using Tardigrade as a backup solution through the S3 gateway, in my case with Synology’s backup software. I think the issues aren’t necessarily on Tardigrade’s end: Synology assumes virtual-hosted-style (vhost) paths, while Tardigrade uses path-style addressing for buckets. Furthermore, Synology only works over HTTPS. It would be really great if these backup tools provided native support for Tardigrade, but the change probably has to come from their clients, either by supporting path-style addressing and HTTP, or by implementing native support, which would obviously be preferred.
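To make the distinction concrete, here is a minimal sketch of the two S3 addressing styles being discussed. The endpoint, bucket, and object names are made-up examples, and real S3 clients usually select the style through a configuration option rather than by building URLs manually:

```python
# Hypothetical illustration of the two S3 URL addressing styles.

def path_style_url(endpoint: str, bucket: str, key: str) -> str:
    # Path-style: the bucket appears in the URL path.
    # This is the style the Tardigrade S3 gateway serves.
    return f"http://{endpoint}/{bucket}/{key}"

def vhost_style_url(endpoint: str, bucket: str, key: str) -> str:
    # Virtual-hosted style: the bucket is a subdomain of the endpoint.
    # This is the style Synology's client assumes (and only over HTTPS).
    return f"https://{bucket}.{endpoint}/{key}"

print(path_style_url("localhost:7777", "backups", "photo.jpg"))
# -> http://localhost:7777/backups/photo.jpg
print(vhost_style_url("s3.example.com", "backups", "photo.jpg"))
# -> https://backups.s3.example.com/photo.jpg
```

A client hard-wired to the vhost style would try to resolve a hostname like backups.localhost:7777, which a local gateway never answers, which is consistent with the failure described above.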


@jensamberg

Please run your S3 gateway in debug mode and let us know which errors it prints out. Any S3 function we currently don’t support should throw a “not implemented” error. We can then check how difficult it would be to support it.

@littleskunk Ah, cool. Please tell me how I can run it in debug mode and I will post the trace.

Run gateway run --log.level debug, or add the log level to the config file.
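For the config-file variant, the equivalent entry would presumably look like this (a sketch; the file name and location depend on how the gateway was set up):

```yaml
# config.yaml of the S3 gateway (assumed default name and location)
log.level: debug
```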

@littleskunk

Here are the logs from when I am using Duplicati:

2020-01-25T10:36:30.765Z DEBUG ecclient Uploading to storage nodes {“Erasure Share Size”: 256, “Stripe Size”: 7424, “Repair Threshold”: 35, “Optimal Threshold”: 80}
2020-01-25T10:36:30.840Z DEBUG ecclient Uploading to storage nodes {“Erasure Share Size”: 256, “Stripe Size”: 7424, “Repair Threshold”: 35, “Optimal Threshold”: 80}
2020-01-25T10:36:31.558Z DEBUG ecclient Upload to storage node failed {“Node ID”: “1qWUXHag6Hd7R9AdkjJh3JMYedppZZhuX9JzEXMnhtVfsJir2k”, “error”: “protocol: expected piece hash; serial number is already used: usedserialsdb error: database disk image is malformed\n\tstorj.io/storj/storagenode/storagenodedb.(*usedSerialsDB).Add:35\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).verifyOrderLimit:77\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).doUpload:276\n\tstorj.io/storj/storagenode/piecestore.(*drpcEndpoint).Upload:225\n\tstorj.io/storj/pkg/pb.DRPCPiecestoreDescription.Method.func1:985\n\tstorj.io/drpc/drpcserver.(*Server).doHandle:175\n\tstorj.io/drpc/drpcserver.(*Server).HandleRPC:153\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:114\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:147\n\tstorj.io/drpc/drpcctx.(*Tracker).track:51”, “errorVerbose”: “group:\n— protocol: expected piece hash\n\tstorj.io/uplink/piecestore.(*Upload).Commit:229\n\tstorj.io/uplink/piecestore.(*BufferedUpload).Commit:45\n\tstorj.io/uplink/piecestore.(*LockingUpload).Commit:105\n\tstorj.io/uplink/ecclient.(*ecClient).PutPiece.func3:237\n\tstorj.io/uplink/ecclient.(*ecClient).PutPiece:267\n\tstorj.io/uplink/ecclient.(*ecClient).Put.func1:113\n— serial number is already used: usedserialsdb error: database disk image is malformed\n\tstorj.io/storj/storagenode/storagenodedb.(*usedSerialsDB).Add:35\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).verifyOrderLimit:77\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).doUpload:276\n\tstorj.io/storj/storagenode/piecestore.(*drpcEndpoint).Upload:225\n\tstorj.io/storj/pkg/pb.DRPCPiecestoreDescription.Method.func1:985\n\tstorj.io/drpc/drpcserver.(*Server).doHandle:175\n\tstorj.io/drpc/drpcserver.(*Server).HandleRPC:153\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:114\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:147\n\tstorj.io/drpc/drpcctx.(*Tracker).track:51”}
2020-01-25T10:36:31.796Z DEBUG ecclient Success threshold reached. Cancelling remaining uploads. {“Optimal Threshold”: 80}
2020-01-25T10:36:31.796Z DEBUG ecclient Failed dialing for putting piece to node {“Piece ID”: “3J2PY6QPZXJBHA2RLWCOZ56CJW4EP3VXU4MUB5XEFSJYIFQDR6IA”, “Node ID”: “12PSMMCAdyyGNMgB4WesFbemPRPZKBRHpKSwkgEDQRTRGoFFPB5”, “error”: “piecestore: rpccompat: context canceled”, “errorVerbose”: “piecestore: rpccompat: context canceled\n\tstorj.io/common/rpc.Dialer.dialTransport:242\n\tstorj.io/common/rpc.Dialer.dial:219\n\tstorj.io/common/rpc.Dialer.DialNode:127\n\tstorj.io/uplink/piecestore.Dial:51\n\tstorj.io/uplink/ecclient.(*ecClient).dialPiecestore:68\n\tstorj.io/uplink/ecclient.(*ecClient).PutPiece:198\n\tstorj.io/uplink/ecclient.(*ecClient).Put.func1:113”}
2020-01-25T10:36:31.796Z DEBUG ecclient Upload to storage node failed {“Node ID”: “12KKddcFE38spj8VPrSUMKQALaA4FW4jbdaJV14TK1Ds6RSHybx”, “error”: “protocol: expected piece hash; context canceled”, “errorVerbose”: “group:\n— protocol: expected piece hash\n\tstorj.io/uplink/piecestore.(*Upload).Commit:229\n\tstorj.io/uplink/piecestore.(*BufferedUpload).Commit:45\n\tstorj.io/uplink/piecestore.(*LockingUpload).Commit:105\n\tstorj.io/uplink/ecclient.(*ecClient).PutPiece.func3:237\n\tstorj.io/uplink/ecclient.(*ecClient).PutPiece:267\n\tstorj.io/uplink/ecclient.(*ecClient).Put.func1:113\n— context canceled”}
2020-01-25T10:36:31.797Z DEBUG ecclient Upload to storage node failed {“Node ID”: “12PSMMCAdyyGNMgB4WesFbemPRPZKBRHpKSwkgEDQRTRGoFFPB5”, “error”: “piecestore: rpccompat: context canceled”, “errorVerbose”: “piecestore: rpccompat: context canceled\n\tstorj.io/common/rpc.Dialer.dialTransport:242\n\tstorj.io/common/rpc.Dialer.dial:219\n\tstorj.io/common/rpc.Dialer.DialNode:127\n\tstorj.io/uplink/piecestore.Dial:51\n\tstorj.io/uplink/ecclient.(*ecClient).dialPiecestore:68\n\tstorj.io/uplink/ecclient.(*ecClient).PutPiece:198\n\tstorj.io/uplink/ecclient.(*ecClient).Put.func1:113”}
2020-01-25T10:36:31.796Z DEBUG ecclient Failed dialing for putting piece to node {“Piece ID”: “VFI476IU24JPAIKXQRLHQXEM4XH6CTNAVHF2TUBGQ5OH5FLYFMHA”, “Node ID”: “1E1Bb1F12Za3DJwvHz2WiF34yXGZRxnCqjihozNHNq1gXhDmuk”, “error”: “piecestore: rpccompat: context canceled”, “errorVerbose”: “piecestore: rpccompat: context canceled\n\tstorj.io/common/rpc.Dialer.dialTransport:262\n\tstorj.io/common/rpc.Dialer.dial:219\n\tstorj.io/common/rpc.Dialer.DialNode:127\n\tstorj.io/uplink/piecestore.Dial:51\n\tstorj.io/uplink/ecclient.(*ecClient).dialPiecestore:68\n\tstorj.io/uplink/ecclient.(*ecClient).PutPiece:198\n\tstorj.io/uplink/ecclient.(*ecClient).Put.func1:113”}
2020-01-25T10:36:31.796Z DEBUG ecclient Failed dialing for putting piece to node {“Piece ID”: “PQF7774QW5VC37Z4JUZAYKGCWZR2Z6MVIGBOWDNDL7PZVI23ENDQ”, “Node ID”: “12FADiHgiPU1c6LoNGBF34fEQR9vuJFBXXiB5xBPyCdfgZrAvCi”, “error”: “piecestore: rpccompat: context canceled”, “errorVerbose”: “piecestore: rpccompat: context canceled\n\tstorj.io/common/rpc.Dialer.dialTransport:262\n\tstorj.io/common/rpc.Dialer.dial:219\n\tstorj.io/common/rpc.Dialer.DialNode:127\n\tstorj.io/uplink/piecestore.Dial:51\n\tstorj.io/uplink/ecclient.(*ecClient).dialPiecestore:68\n\tstorj.io/uplink/ecclient.(*ecClient).PutPiece:198\n\tstorj.io/uplink/ecclient.(*ecClient).Put.func1:113”}
2020-01-25T10:36:31.797Z DEBUG ecclient Failed dialing for putting piece to node {“Piece ID”: “VZOE4J7LRSQKX4IR54CSI7OZSIHDWF5XKM63TM4SUEPGASLE2SLA”, “Node ID”: “12B5suGhHi6Wfqx562gRRxm9zD4nuvb2Tonrk9tzkERnzSgRngf”, “error”: “piecestore: rpccompat: context canceled”, “errorVerbose”: “piecestore: rpccompat: context canceled\n\tstorj.io/common/rpc.Dialer.dialTransport:242\n\tstorj.io/common/rpc.Dialer.dial:219\n\tstorj.io/common/rpc.Dialer.DialNode:127\n\tstorj.io/uplink/piecestore.Dial:51\n\tstorj.io/uplink/ecclient.(*ecClient).dialPiecestore:68\n\tstorj.io/uplink/ecclient.(*ecClient).PutPiece:198\n\tstorj.io/uplink/ecclient.(*ecClient).Put.func1:113”}
2020-01-25T10:36:31.797Z DEBUG ecclient Failed dialing for putting piece to node {“Piece ID”: “RR4NTOENDLH27IE4N4HON4RGSZBYDEN4YBXWMZ7JTVY5556LRXCQ”, “Node ID”: “18yCrSwXdN81boinuUvrAyJ88utHrbXrnEnJBFUSTURgT7tB49”, “error”: “piecestore: rpccompat: context canceled”, “errorVerbose”: “piecestore: rpccompat: context canceled\n\tstorj.io/common/rpc.Dialer.dialTransport:242\n\tstorj.io/common/rpc.Dialer.dial:219\n\tstorj.io/common/rpc.Dialer.DialNode:127\n\tstorj.io/uplink/piecestore.Dial:51\n\tstorj.io/uplink/ecclient.(*ecClient).dialPiecestore:68\n\tstorj.io/uplink/ecclient.(*ecClient).PutPiece:198\n\tstorj.io/uplink/ecclient.(*ecClient).Put.func1:113”}
2020-01-25T10:36:31.797Z DEBUG ecclient Failed dialing for putting piece to node {“Piece ID”: “7ZMOBGIFTRTSOKFK6K3ILA5L5PU7K2SE2MWAU54SCJODXKOHXGYQ”, “Node ID”: “1QuhrCAZyzS4actXxojcbuc2ozqvTKDEtXx8trcvJyyjkTJ3VR”, “error”: “piecestore: rpccompat: context canceled”, “errorVerbose”: “piecestore: rpccompat: context canceled\n\tstorj.io/common/rpc.Dialer.dialTransport:262\n\tstorj.io/common/rpc.Dialer.dial:219\n\tstorj.io/common/rpc.Dialer.DialNode:127\n\tstorj.io/uplink/piecestore.Dial:51\n\tstorj.io/uplink/ecclient.(*ecClient).dialPiecestore:68\n\tstorj.io/uplink/ecclient.(*ecClient).PutPiece:198\n\tstorj.io/uplink/ecclient.(*ecClient).Put.func1:113”}
2020-01-25T10:36:31.797Z DEBUG ecclient Upload to storage node failed {“Node ID”: “144BC984zXp8VsYusBfdzvCZcKqWa1d3FJuamMJG27cFRWRaVB”, “error”: “protocol: expected piece hash; context canceled”, “errorVerbose”: “group:\n— protocol: expected piece hash\n\tstorj.io/uplink/piecestore.(*Upload).Commit:229\n\tstorj.io/uplink/piecestore.(*BufferedUpload).Commit:45\n\tstorj.io/uplink/piecestore.(*LockingUpload).Commit:105\n\tstorj.io/uplink/ecclient.(*ecClient).PutPiece.func3:237\n\tstorj.io/uplink/ecclient.(*ecClient).PutPiece:267\n\tstorj.io/uplink/ecclient.(*ecClient).Put.func1:113\n— context canceled”}

Did that transfer actually fail? From the log it seems everything went fine.

Looks like you have a problem with your DB:

(screenshot omitted)

I could be wrong, but I’m pretty sure that’s an error thrown by a receiving storage node, which is why it only shows up as a debug line in the gateway log. Shortly after that, the log says the 80 required transfers finished successfully, so that single failed transfer shouldn’t matter.


Oh, sorry, I thought this log was from a storage node.

So I tried it on the QNAP with debug logging. On the QNAP I received no output at all on the debug console.