It seems you’re already done testing this. These are my numbers.
[3445931424723951819] storj.io/storj/storagenode/orders.(*Service).cleanArchive
parents: 574352697452883070
current: 0, highwater: 1, success: 6, errors: 0, panics: 0
success times:
0.00: 1.414455936s
0.10: 1.915118784s
0.25: 2.69612032s
0.50: 3.606134784s
0.75: 3.925831424s
0.90: 4.491372672s
0.95: 4.732360256s
1.00: 4.97334784s
avg: 3.337542081s
ravg: 3.337541888s
failure times:
0.00: 0s
0.10: 0s
0.25: 0s
0.50: 0s
0.75: 0s
0.90: 0s
0.95: 0s
1.00: 0s
avg: 0s
ravg: 0s
I also never saw any "database is locked" errors.
So my node may not have been the best test case to begin with. While looking, I did find some other operations that take a little long, though not always.
[2644439985444218709] storj.io/storj/storage/filestore.(*Dir).WalkNamespace
parents: 7504712759751080778
current: 0, highwater: 1, success: 4, errors: 0, panics: 0
success times:
0.00: 898.7504ms
0.10: 1.060652998s
0.25: 1.303506896s
0.50: 9.704375616s
0.75: 20.809126912s
0.90: 25.918969446s
0.95: 27.622250291s
1.00: 29.325531136s
avg: 12.408258393s
ravg: 12.40825856s
failure times:
0.00: 0s
0.10: 0s
0.25: 0s
0.50: 0s
0.75: 0s
0.90: 0s
0.95: 0s
1.00: 0s
avg: 0s
ravg: 0s
[1761555305208943280] storj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite
parents: 4937451960000796809
current: 0, highwater: 1, success: 1, errors: 0, panics: 0
success times:
0.00: 56.01257472s
0.10: 56.01257472s
0.25: 56.01257472s
0.50: 56.01257472s
0.75: 56.01257472s
0.90: 56.01257472s
0.95: 56.01257472s
1.00: 56.01257472s
avg: 56.012573486s
ravg: 56.01257472s
failure times:
0.00: 0s
0.10: 0s
0.25: 0s
0.50: 0s
0.75: 0s
0.90: 0s
0.95: 0s
1.00: 0s
avg: 0s
ravg: 0s
[7504712759751080778] storj.io/storj/storagenode/pieces.(*Store).WalkSatellitePieces
parents: 1761555305208943280
current: 0, highwater: 1, success: 4, errors: 0, panics: 0
success times:
0.00: 899.776768ms
0.10: 1.062679052s
0.25: 1.30703248s
0.50: 10.309496384s
0.75: 23.005384192s
0.90: 29.897900646s
0.95: 32.195406131s
1.00: 34.492911616s
avg: 14.002920596s
ravg: 14.002920448s
failure times:
0.00: 0s
0.10: 0s
0.25: 0s
0.50: 0s
0.75: 0s
0.90: 0s
0.95: 0s
1.00: 0s
avg: 0s
ravg: 0s
Most notably, SpaceUsedTotalAndBySatellite takes 56s. But I think all of these may just run on node start. Then again, that's the most likely time people are looking at their logs and might notice errors.
I think my SSD cache may help me out a lot with these operations, so numbers on my node may be lower than usual.
And then there are the downloads and uploads, which obviously can take some time. This all makes sense; I just found it interesting to see these numbers for my node.
[4693231147204235064] storj.io/storj/storagenode/piecestore.(*Endpoint).doDownload
parents: 3668856111289194281
current: 0, highwater: 12, success: 95989, errors: 5429, panics: 0
error piecestore: 4300
error grpc_NotFound: 1129
success times:
0.00: 104.383776ms
0.10: 426.894982ms
0.25: 3.178770432s
0.50: 4.499580928s
0.75: 7.71596864s
0.90: 17.771147059s
0.95: 20.122054963s
1.00: 38.093918208s
avg: 7.819379925s
ravg: 6.805011968s
failure times:
0.00: 558.815µs
0.10: 1.039024ms
0.25: 1.85388528s
0.50: 2.60231936s
0.75: 3.662109568s
0.90: 4.17608535s
0.95: 4.501723392s
1.00: 5.847044608s
avg: 2.282009959s
ravg: 2.495076352s
[5901729881191614557] storj.io/storj/storagenode/piecestore.(*Endpoint).doUpload
parents: 3668856111289194281
current: 21, highwater: 30, success: 252547, errors: 271, panics: 0
error piecestore protocol: 271
success times:
0.00: 3.774882ms
0.10: 35.420697ms
0.25: 133.667172ms
0.50: 2.121049984s
0.75: 3.598667008s
0.90: 5.666751744s
0.95: 6.630482124s
1.00: 12.148166656s
avg: 3.199513655s
ravg: 2.41478784s
failure times:
0.00: 51.637296ms
0.10: 449.94814ms
0.25: 1.446116224s
0.50: 4.065157504s
0.75: 9.156705792s
0.90: 22.263640678s
0.95: 4m32.357459558s
1.00: 6m19.168489472s
avg: 23.017425405s
ravg: 29.395070976s