Volunteer for the orders.db "database is locked" fix

I might have a solution for the issue, but it is so trivial that I am not sure about my own test results. It would be nice if someone could double-check my results.

First of all I want to explain how to debug this kind of issue. More power to the community :slight_smile:
We have a guide here: Guide to debug my storage node. Now we have an example to try it out. /mon/funcs is showing me this:

[7829512224628212148] storj.io/storj/storagenode/orders.(*Service).cleanArchive
parents: 314990732229963913
current: 0, highwater: 1, success: 1, errors: 0, panics: 0
success times:
0.00: 1m45.693569024s
0.10: 1m45.693569024s
0.25: 1m45.693569024s
0.50: 1m45.693569024s
0.75: 1m45.693569024s
0.90: 1m45.693569024s
0.95: 1m45.693569024s
1.00: 1m45.693569024s
avg: 1m45.693570498s
ravg: 1m45.693569024s
failure times:
0.00: 0s
0.10: 0s
0.25: 0s
0.50: 0s
0.75: 0s
0.90: 0s
0.95: 0s
1.00: 0s
avg: 0s
ravg: 0s

My database is locked for more than 1 minute. This is the bad query we need to fix!

I have a fix ready and would appreciate some help double-checking my results. Who can show me a similar output and is able to open a SQLite DB from the command line?
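If you want to check your own node, here is one way to pull the same numbers. This is only a sketch; it assumes your node exposes the debug endpoint on localhost:7777, as in the outputs further down in this thread.

# dump the monkit function stats and show only the cleanArchive block
curl -s localhost:7777/mon/funcs | grep -A 25 'cleanArchive'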


All my DBs were vacuumed and the issue is gone, but I am posting my results for comparison with others:

curl -s localhost:7777/mon/funcs

[2747460344053707592] storj.io/storj/storagenode/orders.(*Service).cleanArchive
  parents: 2106176561701804950
  current: 0, highwater: 1, success: 1, errors: 0, panics: 0
  success times:
    0.00: 536.909312ms
    0.10: 536.909312ms
    0.25: 536.909312ms
    0.50: 536.909312ms
    0.75: 536.909312ms
    0.90: 536.909312ms
    0.95: 536.909312ms
    1.00: 536.909312ms
    avg: 536.909296ms
    ravg: 536.909312ms
  failure times:
    0.00: 0s
    0.10: 0s
    0.25: 0s
    0.50: 0s
    0.75: 0s
    0.90: 0s
    0.95: 0s
    1.00: 0s
    avg: 0s
    ravg: 0s

Here is a good example from another location, with a J3455 and 2 HDDs in RAID1:

curl -s localhost:7777/mon/funcs

[7989219397104763343] storj.io/storj/storagenode/orders.(*Service).cleanArchive
  parents: 5919260118370286668
  current: 0, highwater: 1, success: 1, errors: 0, panics: 0
  success times:
    0.00: 1.15921664s
    0.10: 1.15921664s
    0.25: 1.15921664s
    0.50: 1.15921664s
    0.75: 1.15921664s
    0.90: 1.15921664s
    0.95: 1.15921664s
    1.00: 1.15921664s
    avg: 1.159216586s
    ravg: 1.15921664s
  failure times:
    0.00: 0s
    0.10: 0s
    0.25: 0s
    0.50: 0s
    0.75: 0s
    0.90: 0s
    0.95: 0s
    1.00: 0s
    avg: 0s
    ravg: 0s

PS: sorry, I copied the wrong segment before.

[3333600422168270371] storj.io/storj/storagenode/storagenodedb.(*ordersDB).ListUnsentBySatellite
parents: 8166009907792249104
current: 0, highwater: 1, success: 1, errors: 0, panics: 0
success times:
0.00: 4.091235328s
0.10: 4.091235328s
0.25: 4.091235328s
0.50: 4.091235328s
0.75: 4.091235328s
0.90: 4.091235328s
0.95: 4.091235328s
1.00: 4.091235328s
avg: 4.091235307s
ravg: 4.091235328s
failure times:
0.00: 0s
0.10: 0s
0.25: 0s
0.50: 0s
0.75: 0s
0.90: 0s
0.95: 0s
1.00: 0s
avg: 0s
ravg: 0s

[6176267383194382450] storj.io/storj/storagenode/storagenodedb.(*ordersDB).Archive
parents: 2029347870328026421
current: 0, highwater: 1, success: 0, errors: 1, panics: 0
error ordersdb error: 1
success times:
0.00: 0s
0.10: 0s
0.25: 0s
0.50: 0s
0.75: 0s
0.90: 0s
0.95: 0s
1.00: 0s
avg: 0s
ravg: 0s
failure times:
0.00: 10.008972288s
0.10: 10.008972288s
0.25: 10.008972288s
0.50: 10.008972288s
0.75: 10.008972288s
0.90: 10.008972288s
0.95: 10.008972288s
1.00: 10.008972288s
avg: 10.008971837s
ravg: 10.008972288s

[2021013015339235150] storj.io/storj/storagenode/storagenodedb.(*usedSerialsDB).DeleteExpired
parents: 6853422500963213883
current: 0, highwater: 1, success: 1, errors: 0, panics: 0
success times:
0.00: 5.048730112s
0.10: 5.048730112s
0.25: 5.048730112s
0.50: 5.048730112s
0.75: 5.048730112s
0.90: 5.048730112s
0.95: 5.048730112s
1.00: 5.048730112s
avg: 5.048729892s
ravg: 5.048730112s
failure times:
0.00: 0s
0.10: 0s
0.25: 0s
0.50: 0s
0.75: 0s
0.90: 0s
0.95: 0s
1.00: 0s
avg: 0s
ravg: 0s

and a few more similar sections.

CleanArchive, however, was 0s.

[8300399263887421623] storj.io/storj/storagenode/orders.(*Service).cleanArchive
  parents: 2356506235585210977
  current: 0, highwater: 1, success: 1, errors: 0, panics: 0
  success times:
    0.00: 58.499584s
    0.10: 58.499584s
    0.25: 58.499584s
    0.50: 58.499584s
    0.75: 58.499584s
    0.90: 58.499584s
    0.95: 58.499584s
    1.00: 58.499584s
    avg: 58.499585987s
    ravg: 58.499584s
  failure times:
    0.00: 0s
    0.10: 0s
    0.25: 0s
    0.50: 0s
    0.75: 0s
    0.90: 0s
    0.95: 0s
    1.00: 0s
    avg: 0s
    ravg: 0s

3.64 TB node, v0.26.2

[3402708192170055487] storj.io/storj/storagenode/orders.(*Service).cleanArchive
parents: 611693002954375862
current: 0, highwater: 1, success: 3, errors: 0, panics: 0
success times:
0.00: 1m22.447990784s
0.10: 1m23.98837678s
0.25: 1m26.298955776s
0.50: 1m30.149920768s
0.75: 3m16.897984512s
0.90: 4m20.946822758s
0.95: 4m42.296435507s
1.00: 5m3.646048256s
avg: 2m38.747983031s
ravg: 2m38.747983872s
failure times:
0.00: 0s
0.10: 0s
0.25: 0s
0.50: 0s
0.75: 0s
0.90: 0s
0.95: 0s
1.00: 0s
avg: 0s
ravg: 0s

[6617357016406631384] storj.io/storj/storagenode/orders.(*Service).cleanArchive
parents: 680352504211195851
current: 0, highwater: 1, success: 1, errors: 0, panics: 0
success times:
0.00: 10m9.194606592s
0.10: 10m9.194606592s
0.25: 10m9.194606592s
0.50: 10m9.194606592s
0.75: 10m9.194606592s
0.90: 10m9.194606592s
0.95: 10m9.194606592s
1.00: 10m9.194606592s
avg: 10m9.194595396s
ravg: 10m9.194606592s
failure times:
0.00: 0s
0.10: 0s
0.25: 0s
0.50: 0s
0.75: 0s
0.90: 0s
0.95: 0s
1.00: 0s
avg: 0s
ravg: 0s

Please be careful. Stop the node, make a copy of the orders DB, and don't execute any of the following commands while your storage node is running. Just for safety.
If the fix works, remember that we all have to undo the changes in order to be able to install the next release. I will write down the commands for that as soon as we have the test results we are looking for.

Add this to your config file:

  • storage2.orders.cleanup-interval: 1h0m0s

If you know how to open the orders DB, please execute the following 2 commands. If you don't know how to open the file, please don't even try it. This test is not worth getting into any trouble over. Updating only the config and reporting your results will help us as well. A sketch of how the commands can be run with the sqlite3 CLI follows after the list.

  • CREATE INDEX idx_order_archived_at ON order_archive_(archived_at);
  • VACUUM;
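For anyone who asked how to open the DB from the command line, here is a minimal sketch using the sqlite3 CLI. The path is only a placeholder; adjust it to your node's storage directory, and run this only while the node is stopped and after you have a backup.

# node must be stopped; make a backup copy first (adjust the path to your storage location)
cp /path/to/storage/orders.db /path/to/storage/orders.db.bak

# create the index on the archive table, then compact the file
sqlite3 /path/to/storage/orders.db "CREATE INDEX idx_order_archived_at ON order_archive_(archived_at);"
sqlite3 /path/to/storage/orders.db "VACUUM;"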

Did it on the J3455 with 2x HDD in RAID1:

Before applying the fix (VACUUM only):

[7989219397104763343] storj.io/storj/storagenode/orders.(*Service).cleanArchive
  parents: 5919260118370286668
  current: 0, highwater: 1, success: 2, errors: 0, panics: 0
  success times:
    0.00: 1.15921664s
    0.10: 1.320994534s
    0.25: 1.563661376s
    0.50: 1.968106112s
    0.75: 2.372550848s
    0.90: 2.615217689s
    0.95: 2.696106636s
    1.00: 2.776995584s
    avg: 1.968106134s
    ravg: 1.968106112s
  failure times:
    0.00: 0s
    0.10: 0s
    0.25: 0s
    0.50: 0s
    0.75: 0s
    0.90: 0s
    0.95: 0s
    1.00: 0s
    avg: 0s
    ravg: 0s

After applying the fix:

[2591409128394679333] storj.io/storj/storagenode/orders.(*Service).cleanArchive
  parents: 6121040160923027883
  current: 0, highwater: 1, success: 1, errors: 0, panics: 0
  success times:
    0.00: 3.055235ms
    0.10: 3.055235ms
    0.25: 3.055235ms
    0.50: 3.055235ms
    0.75: 3.055235ms
    0.90: 3.055235ms
    0.95: 3.055235ms
    1.00: 3.055235ms
    avg: 3.055235ms
    ravg: 3.055235ms
  failure times:
    0.00: 0s
    0.10: 0s
    0.25: 0s
    0.50: 0s
    0.75: 0s
    0.90: 0s
    0.95: 0s
    1.00: 0s
    avg: 0s
    ravg: 0s

Another node, before applying the fix (VACUUM only):

[2747460344053707592] storj.io/storj/storagenode/orders.(*Service).cleanArchive
  parents: 2106176561701804950
  current: 0, highwater: 1, success: 1, errors: 0, panics: 0
  success times:
    0.00: 536.909312ms
    0.10: 536.909312ms
    0.25: 536.909312ms
    0.50: 536.909312ms
    0.75: 536.909312ms
    0.90: 536.909312ms
    0.95: 536.909312ms
    1.00: 536.909312ms
    avg: 536.909296ms
    ravg: 536.909312ms
  failure times:
    0.00: 0s
    0.10: 0s
    0.25: 0s
    0.50: 0s
    0.75: 0s
    0.90: 0s
    0.95: 0s
    1.00: 0s
    avg: 0s
    ravg: 0s

After applying the fix:

[2540274537434400169] storj.io/storj/storagenode/orders.(*Service).cleanArchive
  parents: 5337544433457757867
  current: 0, highwater: 1, success: 1, errors: 0, panics: 0
  success times:
    0.00: 13.54504ms
    0.10: 13.54504ms
    0.25: 13.54504ms
    0.50: 13.54504ms
    0.75: 13.54504ms
    0.90: 13.54504ms
    0.95: 13.54504ms
    1.00: 13.54504ms
    avg: 13.54504ms
    ravg: 13.54504ms
  failure times:
    0.00: 0s
    0.10: 0s
    0.25: 0s
    0.50: 0s
    0.75: 0s
    0.90: 0s
    0.95: 0s
    1.00: 0s
    avg: 0s
    ravg: 0s

So, indexes are a must-have :+1:


That was my first impression as well, but after 24h the query will have to clean up too many entries and will take a few seconds. The locking is short but still there. I hope the combination of the reduced interval and the index will do it. At the moment I am keeping my storage node running for a few hours to get a good average over a few more executions.
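For anyone curious why the index helps: the cleanup presumably deletes old rows from order_archive_ filtered on archived_at, which is exactly the column the new index covers, so without it SQLite has to scan the whole table while holding the lock. A quick way to see the difference is to ask SQLite for the query plan. This is only a sketch; the path and the cutoff value are illustrative, and it should be run against a copy while the node is stopped.

# show how SQLite would execute a cleanup-style delete
sqlite3 /path/to/storage/orders.db "EXPLAIN QUERY PLAN DELETE FROM order_archive_ WHERE archived_at < datetime('now', '-1 day');"
# expected: a full SCAN of order_archive_ without the index,
# and a SEARCH ... USING INDEX idx_order_archived_at with it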

From my side after 2h running:

Node on J3455 2xHDD RAID1:

[2591409128394679333] storj.io/storj/storagenode/orders.(*Service).cleanArchive
  parents: 6121040160923027883
  current: 0, highwater: 1, success: 3, errors: 0, panics: 0
  success times:
    0.00: 3.055235ms
    0.10: 6.095388ms
    0.25: 10.655619ms
    0.50: 18.256004ms
    0.75: 18.76733ms
    0.90: 19.074125ms
    0.95: 19.17639ms
    1.00: 19.278656ms
    avg: 13.529964ms
    ravg: 13.529965ms
  failure times:
    0.00: 0s
    0.10: 0s
    0.25: 0s
    0.50: 0s
    0.75: 0s
    0.90: 0s
    0.95: 0s
    1.00: 0s
    avg: 0s
    ravg: 0s

Another node:

[2540274537434400169] storj.io/storj/storagenode/orders.(*Service).cleanArchive
  parents: 5337544433457757867
  current: 0, highwater: 1, success: 2, errors: 0, panics: 0
  success times:
    0.00: 4.495681ms
    0.10: 5.400616ms
    0.25: 6.75802ms
    0.50: 9.02036ms
    0.75: 11.2827ms
    0.90: 12.640104ms
    0.95: 13.092572ms
    1.00: 13.54504ms
    avg: 9.02036ms
    ravg: 9.02036ms
  failure times:
    0.00: 0s
    0.10: 0s
    0.25: 0s
    0.50: 0s
    0.75: 0s
    0.90: 0s
    0.95: 0s
    1.00: 0s
    avg: 0s
    ravg: 0s

Still much better than before.

I will post the next results after a few hours.

[5270893380334046654] storj.io/storj/storagenode/storagenodedb.(*ordersDB).CleanArchive
  parents: 4503427673448512931
  current: 0, highwater: 1, success: 5, errors: 0, panics: 0
  success times:
    0.00: 382.490752ms
    0.10: 392.940518ms
    0.25: 408.615168ms
    0.50: 550.771904ms
    0.75: 630.984064ms
    0.90: 722.644403ms
    0.95: 753.197849ms
    1.00: 783.751296ms
    avg: 551.322637ms
    ravg: 551.322624ms
  failure times:
    0.00: 0s
    0.10: 0s
    0.25: 0s
    0.50: 0s
    0.75: 0s
    0.90: 0s
    0.95: 0s
    1.00: 0s
    avg: 0s
    ravg: 0s

So far without any error messages in the logfile.

After 4h running:

Node on J3455 2xHDD RAID1:

[2591409128394679333] storj.io/storj/storagenode/orders.(*Service).cleanArchive
  parents: 6121040160923027883
  current: 0, highwater: 1, success: 4, errors: 0, panics: 0
  success times:
    0.00: 3.055235ms
    0.10: 7.615465ms
    0.25: 14.455811ms
    0.50: 18.300368ms
    0.75: 18.578213ms
    0.90: 18.998478ms
    0.95: 19.138567ms
    1.00: 19.278656ms
    avg: 14.733656ms
    ravg: 14.733657ms
  failure times:
    0.00: 0s
    0.10: 0s
    0.25: 0s
    0.50: 0s
    0.75: 0s
    0.90: 0s
    0.95: 0s
    1.00: 0s
    avg: 0s
    ravg: 0s

Another node:

[2540274537434400169] storj.io/storj/storagenode/orders.(*Service).cleanArchive
  parents: 5337544433457757867
  current: 0, highwater: 1, success: 4, errors: 0, panics: 0
  success times:
    0.00: 180.22µs
    0.10: 1.474858ms
    0.25: 3.416815ms
    0.50: 5.210296ms
    0.75: 7.829943ms
    0.90: 11.259001ms
    0.95: 12.40202ms
    1.00: 13.54504ms
    avg: 6.036463ms
    ravg: 6.036463ms
  failure times:
    0.00: 0s
    0.10: 0s
    0.25: 0s
    0.50: 0s
    0.75: 0s
    0.90: 0s
    0.95: 0s
    1.00: 0s
    avg: 0s
    ravg: 0s

After 6h running:

Node on J3455 2xHDD RAID1:

[2591409128394679333] storj.io/storj/storagenode/orders.(*Service).cleanArchive
  parents: 6121040160923027883
  current: 0, highwater: 1, success: 6, errors: 0, panics: 0
  success times:
    0.00: 3.055235ms
    0.10: 10.655619ms
    0.25: 18.278186ms
    0.50: 18.811694ms
    0.75: 20.78774ms
    0.90: 25.973999ms
    0.95: 28.315614ms
    1.00: 30.65723ms
    avg: 18.480437ms
    ravg: 18.480438ms
  failure times:
    0.00: 0s
    0.10: 0s
    0.25: 0s
    0.50: 0s
    0.75: 0s
    0.90: 0s
    0.95: 0s
    1.00: 0s
    avg: 0s
    ravg: 0s

Another node:

[2540274537434400169] storj.io/storj/storagenode/orders.(*Service).cleanArchive
  parents: 5337544433457757867
  current: 0, highwater: 1, success: 6, errors: 0, panics: 0
  success times:
    0.00: 180.22µs
    0.10: 2.33795ms
    0.25: 4.852988ms
    0.50: 8.983939ms
    0.75: 12.584265ms
    0.90: 13.154869ms
    0.95: 13.349954ms
    1.00: 13.54504ms
    avg: 8.158919ms
    ravg: 8.15892ms
  failure times:
    0.00: 0s
    0.10: 0s
    0.25: 0s
    0.50: 0s
    0.75: 0s
    0.90: 0s
    0.95: 0s
    1.00: 0s
    avg: 0s
    ravg: 0s

PR is open: https://review.dev.storj.io/c/storj/storj/+/254


In order to avoid issues with the next release, we now have to delete the manually created index. A command-line sketch follows below the command.

  • DROP INDEX idx_order_archived_at;
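With the node stopped, this can be run from the sqlite3 CLI in the same way the fix was applied (the path is again a placeholder for your storage directory):

# remove the manually created index again
sqlite3 /path/to/storage/orders.db "DROP INDEX idx_order_archived_at;"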

I still have the config change in place. That one will not be a problem for the next deployment.


I am waiting for the release. After that I will post my stats in this thread.

@littleskunk Thanks a lot!

Last portion of results after a 15h run:

Node on J3455 2xHDD RAID1:

[2591409128394679333] storj.io/storj/storagenode/orders.(*Service).cleanArchive
  parents: 6121040160923027883
  current: 0, highwater: 1, success: 15, errors: 0, panics: 0
  success times:
    0.00: 3.055235ms
    0.10: 16.107975ms
    0.25: 18.811694ms
    0.50: 30.65723ms
    0.75: 39.031804ms
    0.90: 43.188388ms
    0.95: 50.727087ms
    1.00: 66.605224ms
    avg: 29.588094ms
    ravg: 29.588094ms
  failure times:
    0.00: 0s
    0.10: 0s
    0.25: 0s
    0.50: 0s
    0.75: 0s
    0.90: 0s
    0.95: 0s
    1.00: 0s
    avg: 0s
    ravg: 0s

Another node:

[2540274537434400169] storj.io/storj/storagenode/orders.(*Service).cleanArchive
  parents: 5337544433457757867
  current: 0, highwater: 1, success: 14, errors: 0, panics: 0
  success times:
    0.00: 180.22µs
    0.10: 4.280375ms
    0.25: 6.730719ms
    0.50: 10.028559ms
    0.75: 12.584265ms
    0.90: 23.260583ms
    0.95: 34.65393ms
    1.00: 48.080224ms
    avg: 12.712215ms
    ravg: 12.712216ms
  failure times:
    0.00: 0s
    0.10: 0s
    0.25: 0s
    0.50: 0s
    0.75: 0s
    0.90: 0s
    0.95: 0s
    1.00: 0s
    avg: 0s
    ravg: 0s

In order to avoid issues with the next release, we now have to delete the manually created index.

  • DROP INDEX idx_order_archived_at;

The index is removed.

Can I also ask you about pieceinfo.db? Maybe we should add an index for this DB too (it is quite large, ~600MB on my side)? What do you think about it?

PS: Sorry, I checked pieceinfo.db and saw that it already has indexes:

index|pk_pieceinfo_|pieceinfo_|CREATE UNIQUE INDEX pk_pieceinfo_ ON pieceinfo_(satellite_id, piece_id)
index|idx_pieceinfo__expiration|pieceinfo_|CREATE INDEX idx_pieceinfo__expiration ON pieceinfo_(piece_expiration) WHERE piece_expiration IS NOT NULL
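For reference, a listing like the one above can be produced from the sqlite3 CLI, for example (a sketch; adjust the path to your storage directory):

# list the indexes defined on the pieceinfo_ table
sqlite3 /path/to/storage/pieceinfo.db "SELECT type, name, tbl_name, sql FROM sqlite_master WHERE type = 'index' AND tbl_name = 'pieceinfo_';"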

It seems you’re already done testing this. These are my numbers.

[3445931424723951819] storj.io/storj/storagenode/orders.(*Service).cleanArchive
  parents: 574352697452883070
  current: 0, highwater: 1, success: 6, errors: 0, panics: 0
  success times:
    0.00: 1.414455936s
    0.10: 1.915118784s
    0.25: 2.69612032s
    0.50: 3.606134784s
    0.75: 3.925831424s
    0.90: 4.491372672s
    0.95: 4.732360256s
    1.00: 4.97334784s
    avg: 3.337542081s
    ravg: 3.337541888s
  failure times:
    0.00: 0s
    0.10: 0s
    0.25: 0s
    0.50: 0s
    0.75: 0s
    0.90: 0s
    0.95: 0s
    1.00: 0s
    avg: 0s
    ravg: 0s

I also never saw any "database is locked" errors.
So my node may not have been the best test case to begin with.

While looking, I did find some other functions that take a little long, though some of them not always.

[2644439985444218709] storj.io/storj/storage/filestore.(*Dir).WalkNamespace
  parents: 7504712759751080778
  current: 0, highwater: 1, success: 4, errors: 0, panics: 0
  success times:
    0.00: 898.7504ms
    0.10: 1.060652998s
    0.25: 1.303506896s
    0.50: 9.704375616s
    0.75: 20.809126912s
    0.90: 25.918969446s
    0.95: 27.622250291s
    1.00: 29.325531136s
    avg: 12.408258393s
    ravg: 12.40825856s
  failure times:
    0.00: 0s
    0.10: 0s
    0.25: 0s
    0.50: 0s
    0.75: 0s
    0.90: 0s
    0.95: 0s
    1.00: 0s
    avg: 0s
    ravg: 0s

[1761555305208943280] storj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite
  parents: 4937451960000796809
  current: 0, highwater: 1, success: 1, errors: 0, panics: 0
  success times:
    0.00: 56.01257472s
    0.10: 56.01257472s
    0.25: 56.01257472s
    0.50: 56.01257472s
    0.75: 56.01257472s
    0.90: 56.01257472s
    0.95: 56.01257472s
    1.00: 56.01257472s
    avg: 56.012573486s
    ravg: 56.01257472s
  failure times:
    0.00: 0s
    0.10: 0s
    0.25: 0s
    0.50: 0s
    0.75: 0s
    0.90: 0s
    0.95: 0s
    1.00: 0s
    avg: 0s
    ravg: 0s

[7504712759751080778] storj.io/storj/storagenode/pieces.(*Store).WalkSatellitePieces
  parents: 1761555305208943280
  current: 0, highwater: 1, success: 4, errors: 0, panics: 0
  success times:
    0.00: 899.776768ms
    0.10: 1.062679052s
    0.25: 1.30703248s
    0.50: 10.309496384s
    0.75: 23.005384192s
    0.90: 29.897900646s
    0.95: 32.195406131s
    1.00: 34.492911616s
    avg: 14.002920596s
    ravg: 14.002920448s
  failure times:
    0.00: 0s
    0.10: 0s
    0.25: 0s
    0.50: 0s
    0.75: 0s
    0.90: 0s
    0.95: 0s
    1.00: 0s
    avg: 0s
    ravg: 0s

Most notably, SpaceUsedTotalAndBySatellite takes 56s. But I think all of these may just run on node start. Then again, that's the most likely time people are looking at their logs and may notice errors.

I think my SSD cache may help me out a lot with these operations, so the numbers on my node may be lower than usual.

And then there are the downloads and uploads, which obviously can take some time. This all makes sense; I just found it interesting to see these numbers for my node.

[4693231147204235064] storj.io/storj/storagenode/piecestore.(*Endpoint).doDownload
  parents: 3668856111289194281
  current: 0, highwater: 12, success: 95989, errors: 5429, panics: 0
  error piecestore: 4300
  error grpc_NotFound: 1129
  success times:
    0.00: 104.383776ms
    0.10: 426.894982ms
    0.25: 3.178770432s
    0.50: 4.499580928s
    0.75: 7.71596864s
    0.90: 17.771147059s
    0.95: 20.122054963s
    1.00: 38.093918208s
    avg: 7.819379925s
    ravg: 6.805011968s
  failure times:
    0.00: 558.815µs
    0.10: 1.039024ms
    0.25: 1.85388528s
    0.50: 2.60231936s
    0.75: 3.662109568s
    0.90: 4.17608535s
    0.95: 4.501723392s
    1.00: 5.847044608s
    avg: 2.282009959s
    ravg: 2.495076352s

[5901729881191614557] storj.io/storj/storagenode/piecestore.(*Endpoint).doUpload
  parents: 3668856111289194281
  current: 21, highwater: 30, success: 252547, errors: 271, panics: 0
  error piecestore protocol: 271
  success times:
    0.00: 3.774882ms
    0.10: 35.420697ms
    0.25: 133.667172ms
    0.50: 2.121049984s
    0.75: 3.598667008s
    0.90: 5.666751744s
    0.95: 6.630482124s
    1.00: 12.148166656s
    avg: 3.199513655s
    ravg: 2.41478784s
  failure times:
    0.00: 51.637296ms
    0.10: 449.94814ms
    0.25: 1.446116224s
    0.50: 4.065157504s
    0.75: 9.156705792s
    0.90: 22.263640678s
    0.95: 4m32.357459558s
    1.00: 6m19.168489472s
    avg: 23.017425405s
    ravg: 29.395070976s

The fix is now deployed in production with v0.28.2. Thank you all for your help :slight_smile: