Payout feedback

Ok, here we go

Storagenode1 12PYwN5yzFr9Jsqy1qKdYQd8BvGENKczALx3QtakkwMCU5v55BM

total payout: $7.0448 x 5 = $35.224

Storagenode2 12Ldtzwu1bu3LeXVRe8gXKYugHcuPSJaQk98XEyHbgw8HwRzivc

total payout: $1.5798 x 4 = $6.3192

I just started a third storagenode, but that one only earned about 16 cents.

So I should expect ~$41.70 for October. Yesterday’s exchange rate was ~$0.143/STORJ at the time of the transactions, so I would expect ~290 STORJ, but I only got 258.71.
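For anyone checking my math, this is the back-of-the-envelope calculation (all figures from above; the third node’s $0.16 is approximate):

```python
# Rough sanity check of the expected October payout. All figures are from
# the post above; the third node's ~$0.16 is approximate.
node1_usd = 7.0448 * 5   # storagenode1, x5 surge
node2_usd = 1.5798 * 4   # storagenode2, x4 surge
node3_usd = 0.16         # brand-new third node, approximate

total_usd = node1_usd + node2_usd + node3_usd   # ~41.70
rate = 0.143                                    # USD per STORJ at transaction time
expected_storj = total_usd / rate               # ~291.6
received_storj = 258.71
missing_usd = (expected_storj - received_storj) * rate

print(f"expected ~{expected_storj:.1f} STORJ, received {received_storj}, "
      f"~${missing_usd:.2f} unaccounted for")
```

The ~$4.71 gap is close to the surge portion of the second node ($6.3192 − $1.5798 = $4.7394), which is consistent with the missing-surge explanation that follows.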

I think I found the mistake. The second node doesn’t appear to have gotten the surge payment. If I calculate again without the surge payment for the second node, then everything is spot on. The second node was set up in August, and this blog post says

If you set up your node during the surge payment window, you’ll receive four times the normal payout during the months when surge payments are active.

Should nodes set up after July get the surge payments? If yes, then there seems to be a problem on your side.

The satellites are showing the same numbers, including the held back amount and surge pricing, except for stefan-benten (and only for the first node). So that is the reason for the difference.

Do you have details about that one combination? How much paid traffic and used space?

Do you mean this?

If not, tell me how to get that info.

Sorry, I am low on time at the moment. I will have to look into it later today or maybe tomorrow.

I thought I noticed some differences last month as well, but didn’t have time to really dive in. (And it was pretty close anyway)

I do remember there was some weird glitch in the graphs on the web dashboard for the stefan-benten satellite earlier. Could it be related, maybe there is some faulty accounting for transfers on either the node side or satellite side? I’m not at home right now, but I’ll try to do a similar comparison between the numbers this weekend for my own node.

Alright, dove into it and found the following for my node.
Node ID: 12aYrWFmJqrmhN3zgkvANBTsj2DdLwf2aZC8T5t7CrNazHahKXW


I’ve added my own calculation based on node age and the results above as a personal note on etherscan.


These results are fairly similar to what I saw last month. Most are really close. But the stefan-benten satellite seems to have a payout that is outside expected fluctuations in STORJ pricing.

The difference is still not that big, but it does suggest something may have caused my node to have different stats than the satellite for that one. So as with @donald.m.motsinger’s results, there may be a slight discrepancy on that one satellite.

I’m sure this is all in good faith though. I’m not complaining, just providing some extra info in case it may be helpful to find possible issues.


I found one reason:

2019-11-21T19:37:30.732Z        INFO    piecestore      piecestore/endpoint.go:472      downloaded      {"Piece ID": "Q5ZGNOGJNGP44CVBNC3U6VTHRDHJRLVGVDDNWQM72H5MHNRP6VKQ", "Satellite ID": "118UWpMCHzs6CvSgWd9BfFVjw5K9pZbJjkfZJexMtSkmKxvvAW", "Action": "GET"}
2019-11-21T19:37:31.243Z        ERROR   piecestore      piecestore/endpoint.go:630      failed to add order     {"error": "ordersdb error: database is locked", "errorVerbose": "ordersdb error: database is locked\n\tstorj.io/storj/storagenode/storagenodedb.(*ordersDB).Enqueue:52\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).saveOrder:625\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).doUpload.func3:288\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).doUpload:297\n\tstorj.io/storj/storagenode/piecestore.(*drpcEndpoint).Upload:176\n\tstorj.io/storj/pkg/pb.DRPCPiecestoreDescription.Method.func1:830\n\tstorj.io/drpc/drpcserver.(*Server).doHandle:175\n\tstorj.io/drpc/drpcserver.(*Server).HandleRPC:153\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:114\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:147\n\tstorj.io/drpc/drpcctx.(*Tracker).track:51"}
storj.io/storj/storagenode/piecestore.(*Endpoint).saveOrder
        /root/storj/storagenode/piecestore/endpoint.go:630
storj.io/storj/storagenode/piecestore.(*Endpoint).doUpload.func3
        /root/storj/storagenode/piecestore/endpoint.go:288
storj.io/storj/storagenode/piecestore.(*Endpoint).doUpload
        /root/storj/storagenode/piecestore/endpoint.go:297
storj.io/storj/storagenode/piecestore.(*drpcEndpoint).Upload
        /root/storj/storagenode/piecestore/endpoint.go:176
storj.io/storj/pkg/pb.DRPCPiecestoreDescription.Method.func1
        /root/storj/pkg/pb/piecestore2.pb.go:830
storj.io/drpc/drpcserver.(*Server).doHandle
        /root/go/pkg/mod/storj.io/drpc@v0.0.7-0.20191115031725-2171c57838d2/drpcserver/server.go:175
storj.io/drpc/drpcserver.(*Server).HandleRPC
        /root/go/pkg/mod/storj.io/drpc@v0.0.7-0.20191115031725-2171c57838d2/drpcserver/server.go:153
storj.io/drpc/drpcserver.(*Server).ServeOne
        /root/go/pkg/mod/storj.io/drpc@v0.0.7-0.20191115031725-2171c57838d2/drpcserver/server.go:114
storj.io/drpc/drpcserver.(*Server).Serve.func2
        /root/go/pkg/mod/storj.io/drpc@v0.0.7-0.20191115031725-2171c57838d2/drpcserver/server.go:147
storj.io/drpc/drpcctx.(*Tracker).track
        /root/go/pkg/mod/storj.io/drpc@v0.0.7-0.20191115031725-2171c57838d2/drpcctx/transport.go:51

This download will be unpaid because my storage node didn’t store the order and will not submit it back to the satellite.


I have such errors too.
Docker version:

2019-11-20T18:17:59.358Z        ERROR   orders  archiving orders        {"error": "ordersdb error: database is locked","errorVerbose":"ordersdb error: database is locked\n\tstorj.io/storj/storagenode/storagenodedb.(*ordersDB).archiveOne:238\n\tstorj.io/storj/storagenode/storagenodedb.(*ordersDB).Archive:202\n\tstorj.io/storj/storagenode/orders.(*Service).handleBatches.func2:213\n\tstorj.io/storj/storagenode/orders.(*Service).handleBatches:237\n\tstorj.io/storj/storagenode/orders.(*Service).sendOrders.func1:164\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}

2019-11-20T18:18:01.337Z        ERROR   piecestore      failed to add order     {"error": "ordersdb error: database is locked","errorVerbose": "ordersdb error: database is locked\n\tstorj.io/storj/storagenode/storagenodedb.(*ordersDB).Enqueue:52\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).saveOrder:625\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).doUpload:379\n\tstorj.io/storj/storagenode/piecestore.(*drpcEndpoint).Upload:176\n\tstorj.io/storj/pkg/pb.DRPCPiecestoreDescription.Method.func1:830\n\tstorj.io/drpc/drpcserver.(*Server).doHandle:175\n\tstorj.io/drpc/drpcserver.(*Server).HandleRPC:153\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:114\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:147\n\tstorj.io/drpc/drpcctx.(*Tracker).track:51"}

Windows version:

2019-11-11T02:07:59.603+0300    ERROR   piecestore      failed to add order     {"error": "ordersdb error: database is locked", "errorVerbose": "ordersdb error: database is locked\n\tstorj.io/storj/storagenode/storagenodedb.(*ordersDB).Enqueue:52\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).saveOrder:648\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).doUpload:379\n\tstorj.io/storj/storagenode/piecestore.(*drpcEndpoint).Upload:176\n\tstorj.io/storj/pkg/pb.DRPCPiecestoreDescription.Method.func1:830\n\tstorj.io/drpc/drpcserver.(*Server).doHandle:163\n\tstorj.io/drpc/drpcserver.(*Server).HandleRPC:141\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:102\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:135\n\tstorj.io/drpc/drpcctx.(*Tracker).track:51"}
2019-11-11T02:10:59.351+0300    ERROR   piecestore      failed to add order     {"error": "ordersdb error: database is locked", "errorVerbose": "ordersdb error: database is locked\n\tstorj.io/storj/storagenode/storagenodedb.(*ordersDB).Enqueue:52\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).saveOrder:648\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).doDownload.func4:607\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).doDownload:632\n\tstorj.io/storj/storagenode/piecestore.(*drpcEndpoint).Download:396\n\tstorj.io/storj/pkg/pb.DRPCPiecestoreDescription.Method.func2:838\n\tstorj.io/drpc/drpcserver.(*Server).doHandle:163\n\tstorj.io/drpc/drpcserver.(*Server).HandleRPC:141\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:102\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:135\n\tstorj.io/drpc/drpcctx.(*Tracker).track:51"}
2019-11-11T02:11:07.527+0300    ERROR   piecestore      failed to add order     {"error": "ordersdb error: database is locked", "errorVerbose": "ordersdb error: database is locked\n\tstorj.io/storj/storagenode/storagenodedb.(*ordersDB).Enqueue:52\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).saveOrder:648\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).doUpload:379\n\tstorj.io/storj/storagenode/piecestore.(*drpcEndpoint).Upload:176\n\tstorj.io/storj/pkg/pb.DRPCPiecestoreDescription.Method.func1:830\n\tstorj.io/drpc/drpcserver.(*Server).doHandle:163\n\tstorj.io/drpc/drpcserver.(*Server).HandleRPC:141\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:102\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:135\n\tstorj.io/drpc/drpcctx.(*Tracker).track:51"}

Count (since 2019-11-01): 20

Since 2019-10-05 I found 239 occurrences of

2019-10-19T11:37:43.500Z        ERROR   piecestore      failed to add order     {"error": "ordersdb error: disk I/O error", "errorVerbose": "ordersdb error: disk I/O error\n\tstorj.io/storj/storagenode/storagenodedb.(*ordersDB).Enqueue:52\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).saveOrder:635\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).doDownload.func4:594\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).doDownload:619\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download:378\n\tstorj.io/storj/pkg/pb._Piecestore_Download_Handler:1096\n\tstorj.io/storj/pkg/server.(*Server).logOnErrorStreamInterceptor:23\n\tgoogle.golang.org/grpc.(*Server).processStreamingRPC:1127\n\tgoogle.golang.org/grpc.(*Server).handleStream:1178\n\tgoogle.golang.org/grpc.(*Server).serveStreams.func1.1:696"}

and

2019-10-28T20:34:35.496Z        ERROR   piecestore      failed to add order     {"error": "ordersdb error: database is locked", "errorVerbose": "ordersdb error: database is locked\n\tstorj.io/storj/storagenode/storagenodedb.(*ordersDB).Enqueue:52\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).saveOrder:639\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).doUpload.func3:279\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).doUpload:288\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Upload:162\n\tstorj.io/storj/pkg/pb._Piecestore_Upload_Handler:1070\n\tstorj.io/storj/pkg/server.(*Server).logOnErrorStreamInterceptor:23\n\tgoogle.golang.org/grpc.(*Server).processStreamingRPC:1127\n\tgoogle.golang.org/grpc.(*Server).handleStream:1178\n\tgoogle.golang.org/grpc.(*Server).serveStreams.func1.1:696"}

:frowning:
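For what it’s worth, the counts above can be reproduced by tallying the log. A minimal sketch, with `LOG_LINES` standing in for the real log output (e.g. from `docker logs storagenode`):

```python
# Tally the ordersdb errors seen in a storagenode log. LOG_LINES is a
# stand-in for the real log; the sample lines are shortened versions of
# the entries quoted above.
from collections import Counter

LOG_LINES = [
    '2019-11-20T18:17:59.358Z ERROR orders archiving orders {"error": "ordersdb error: database is locked"}',
    '2019-11-20T18:18:01.337Z ERROR piecestore failed to add order {"error": "ordersdb error: database is locked"}',
    '2019-10-19T11:37:43.500Z ERROR piecestore failed to add order {"error": "ordersdb error: disk I/O error"}',
]

counts = Counter()
for line in LOG_LINES:
    if "ordersdb error" not in line:
        continue
    if "database is locked" in line:
        counts["database is locked"] += 1
    elif "disk I/O error" in line:
        counts["disk I/O error"] += 1

print(dict(counts))  # {'database is locked': 2, 'disk I/O error': 1}
```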

I figured I’d offer some more feedback on this month’s payouts. It seems satellites have consistently paid out more than I was expecting.

us-central-1 ($0.8118; 75%:$0.6088; surgex4: $2.4352)
Received: 34.83595734 ($4.01)
europe-west-1 ($1.8233; 50%:$0.9117; surgex4: $3.6468)
Received: 59.54780419 ($7.16)
asia-east-1 ($0.0005; 50%:$0.0003; surgex4: $0.0012)
Received: 0.01197882 ($0.00)
stefan-benten ($19.3797; 75%:$14.5348; surgex4: $58.1392)
Received: 545.69426414 ($65.59)

Etherscan can now show estimated value at time of transaction. That’s what I used for the dollar values for the payouts. Obviously not complaining about getting more, just providing feedback.

I haven’t been keeping track of whose node is in which month, but… the payout seems to make sense if your node is in the 7-9 month escrow period, considering the 4x surge payment.

Are we still talking about 12aYrWFmJqrmhN3zgkvANBTsj2DdLwf2aZC8T5t7CrNazHahKXW? US-Central is telling me that this node joined 2019-02-28. I double checked this edge case last month because my node joined on 2019-01-31. If you join on the last day of a month, we count that month for the hold back calculation even if you didn’t manage to get any data. We are fine with this edge case.

Congratulations: in terms of the hold back calculation you are in month 10, with 0% hold back. That should explain the difference in payout.
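A sketch of the hold back tiers as discussed in this thread (25% paid out in months 1-3, 50% in months 4-6, 75% in months 7-9, 100% from month 10), with the join month counting as month 1 even when a node joins on the last day of that month. The schedule itself is Storj’s; this is only to reproduce the numbers here:

```python
from datetime import date

# Hold back tiers as described in this thread: the node is paid 25% of
# gross in months 1-3, 50% in months 4-6, 75% in months 7-9, 100% after.
def paid_fraction(month: int) -> float:
    if month <= 3:
        return 0.25
    if month <= 6:
        return 0.50
    if month <= 9:
        return 0.75
    return 1.00

def age_in_months(joined: date, payout: date) -> int:
    # The join month counts as month 1, even for a node that joined on the
    # last day of that month (the edge case described above).
    return (payout.year - joined.year) * 12 + (payout.month - joined.month) + 1

# Joined 2019-01-31, October 2019 payout -> month 10, 0% held back.
month = age_in_months(date(2019, 1, 31), date(2019, 10, 1))
print(month, paid_fraction(month))  # 10 1.0
```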


And one other detail.
Depending on the date you joined, you still get 5x surge pricing. My understanding was that the 4x level and lower are for nodes that join now. Should I double check it, or should we just take the money and not question it? :slight_smile:


Haha, might wanna double check that. I mean, I won’t say no to the money, but I’m mostly trying to understand where the numbers came from. It was indeed still about node ID 12aYrWFmJqrmhN3zgkvANBTsj2DdLwf2aZC8T5t7CrNazHahKXW. Correcting for the additional month, I actually get numbers that are in some cases further away from what I got before. This assumes the extra month applies to both us-central-1 and stefan-benten, which were active at the time.

us-central-1 ($0.8118; surgex4: $3.2472) => got closer
Received: 34.83595734 ($4.01)
europe-west-1 ($1.8233; 50%:$0.9117; surgex4: $3.6468) => no change
Received: 59.54780419 ($7.16)
asia-east-1 ($0.0005; 50%:$0.0003; surgex4: $0.0012) => no change
Received: 0.01197882 ($0.00)
stefan-benten ($19.3797; surgex4: $77.5188) => got further away
Received: 545.69426414 ($65.59)

For europe-west-1 and asia-east-1 I assumed the first month I got a payout from them was the start of the 9 escrow months, though I guess it’s possible my node connected with them in prior months as well, despite not receiving data.

Playing around with the numbers a bit, I think this may be the most likely scenario, assuming you are correct and x5 surge is still in effect. This would suggest only us-central-1 actually counts February as the first month, and europe-west-1 is somehow already in 75% payout. But no matter what numbers I choose for stefan-benten, the amount seems a bit off.

us-central-1 ($0.8118; surgex5: $4.0590)
Received: 34.83595734 ($4.01)
europe-west-1 ($1.8233; 75%:$1.3675; surgex5: $6.8373)
Received: 59.54780419 ($7.16)
asia-east-1 ($0.0005; 50%:$0.0003; surgex5: $0.0015)
Received: 0.01197882 ($0.00)
stefan-benten ($19.3797; 75%:$14.5348; surgex5: $72.674)
Received: 545.69426414 ($65.59)
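All the expectations above follow the same formula, assuming payout is simply gross earnings × fraction paid out (hold back tier) × surge multiplier:

```python
# Expected payout = gross earnings x fraction paid out (hold back tier)
# x surge multiplier. Figures from the scenario above.
def expected_usd(gross: float, paid_fraction: float, surge: int) -> float:
    return gross * paid_fraction * surge

print(round(expected_usd(0.8118, 1.00, 5), 4))   # us-central-1: 4.059
print(round(expected_usd(1.8233, 0.75, 5), 4))   # europe-west-1: 6.8374
print(round(expected_usd(19.3797, 0.75, 5), 4))  # stefan-benten: 72.6739
```

Against the received $65.59, stefan-benten comes out roughly 10% short; the other satellites land within rounding of the received amounts.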

You don’t have to do further checks for my benefit, just providing the info in case you want to double check things on your end. I’m happy either way. :slight_smile:

Please double-check.

Am I understanding you right that my node is in the same month for each satellite, even though the node didn’t have contact with all satellites in month 1?

Yes, that is correct. The other satellites joined the network later, but they have the same start dates for all storage nodes. If needed, I can get the earliest date per satellite.


Needed is a strong word, but it would be appreciated. :slight_smile: I appreciate the detailed responses a lot already!


US-Central: 2019-01-31
Stefan: 2019-03-03
EU-West: 2019-05-31
Asia-East: 2019-06-10


Thanks! That exactly confirms the numbers I found while playing around. Looks like the surge payout is indeed still x5 for existing users. The only difference that remains is that the stefan-benten payout is roughly 10% below expected. Last month I saw a similar but slightly smaller difference.

These are all errors related to orders in my log for November.

2019-11-19T03:20:35.449Z        ERROR   orders.12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S      failed to settle orders {"error": "order: unable to connect to the satellite: rpccompat: context deadline exceeded", "errorVerbose": "order: unable to connect to the satellite: rpccompat: context deadline exceeded\n\tstorj.io/storj/storagenode/orders.(*Service).settle:256\n\tstorj.io/storj/storagenode/orders.(*Service).Settle:195\n\tstorj.io/storj/storagenode/orders.(*Service).sendOrders.func2:174\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2019-11-19T03:20:35.449Z        ERROR   orders.121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6      failed to settle orders {"error": "order: unable to connect to the satellite: rpccompat: context deadline exceeded", "errorVerbose": "order: unable to connect to the satellite: rpccompat: context deadline exceeded\n\tstorj.io/storj/storagenode/orders.(*Service).settle:256\n\tstorj.io/storj/storagenode/orders.(*Service).Settle:195\n\tstorj.io/storj/storagenode/orders.(*Service).sendOrders.func2:174\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2019-11-19T03:20:35.449Z        ERROR   orders.12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs      failed to settle orders {"error": "order: unable to connect to the satellite: rpccompat: context deadline exceeded", "errorVerbose": "order: unable to connect to the satellite: rpccompat: context deadline exceeded\n\tstorj.io/storj/storagenode/orders.(*Service).settle:256\n\tstorj.io/storj/storagenode/orders.(*Service).Settle:195\n\tstorj.io/storj/storagenode/orders.(*Service).sendOrders.func2:174\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2019-11-19T03:20:35.449Z        ERROR   orders.118UWpMCHzs6CvSgWd9BfFVjw5K9pZbJjkfZJexMtSkmKxvvAW       failed to settle orders {"error": "order: unable to connect to the satellite: rpccompat: context deadline exceeded", "errorVerbose": "order: unable to connect to the satellite: rpccompat: context deadline exceeded\n\tstorj.io/storj/storagenode/orders.(*Service).settle:256\n\tstorj.io/storj/storagenode/orders.(*Service).Settle:195\n\tstorj.io/storj/storagenode/orders.(*Service).sendOrders.func2:174\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2019-11-21T18:20:27.969Z        ERROR   orders.118UWpMCHzs6CvSgWd9BfFVjw5K9pZbJjkfZJexMtSkmKxvvAW       failed to settle orders {"error": "order: unable to connect to the satellite: rpccompat: context deadline exceeded", "errorVerbose": "order: unable to connect to the satellite: rpccompat: context deadline exceeded\n\tstorj.io/storj/storagenode/orders.(*Service).settle:256\n\tstorj.io/storj/storagenode/orders.(*Service).Settle:195\n\tstorj.io/storj/storagenode/orders.(*Service).sendOrders.func2:174\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}

Pretty sure those won’t be an issue as those orders will be resent.

Given the results I’ve seen from garbage collection this month, I’m now fairly certain this small discrepancy was caused by garbage data that was still on my node but no longer paid for.

This is further evidenced by the disk space used graph, which shows a significant drop in TB*h per day since garbage collection did its thing.

This month should still see a difference, but a smaller one since it was fixed halfway through. I expect January will have all satellites paying exactly what the earnings calculator will show. I’ll be sure to report back then.
