No Payments for Traffic till July?

Hey Guys,

I have been digging into my nodes and my payments a bit, and I have found an error with one of my nodes and its payments.
I looked through the payout dashboard.

I have one node where, for every month of 2020 before July (the current period), the traffic is shown as 0. That cannot be right; hundreds of GB of egress were transferred.

I thought this was a display error, but looking through the ERC-20 transactions to my wallet, I was only paid for storage; there are NO payments for the traffic.
It is the same with the held amount: the traffic is simply ignored, so the held amount is smaller than it should be.

What is going on here?
It is a full node with 3.1 TB.


There seems to be no problem on my other nodes.

NodeID: 1tyZwo1Vv8bZePbGJg1REfTW6YHXiM1KHou7uqMAMyfXnCfQ8P
Version running: 1.6.4

Edit: in January and February there are payments and tracked egress. Somewhere in March it stopped.

Tracking for the current period (July) looks OK so far.

Example for April:
The payout dashboard says $1.89 was paid, which closely matches the amount of STORJ I received in my wallet.
But earnings.py says I should have gotten about $8 for April (including surge). I did not receive that.
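As a rough sanity check, here is a minimal sketch of the payout math, assuming the 2020 base rates of about $1.50 per TB-month for storage and $20 per TB for egress; the egress figure, surge multiplier, and held-back fraction below are placeholders, not my node's real numbers:

```python
# Back-of-the-envelope payout estimate. All rates and figures here are
# assumptions/placeholders, not official numbers or real node stats.
STORAGE_RATE_USD_PER_TBM = 1.50   # assumed 2020 base rate for storage
EGRESS_RATE_USD_PER_TB = 20.00    # assumed 2020 base rate for egress
HELD_BACK = 0.50                  # placeholder held-back fraction for a young node
SURGE = 1.0                       # placeholder surge multiplier

stored_tbm = 3.1                  # roughly a full 3.1 TB node stored all month
egress_tb = 0.3                   # placeholder for "hundreds of GB" of egress

gross_storage = stored_tbm * STORAGE_RATE_USD_PER_TBM
gross_egress = egress_tb * EGRESS_RATE_USD_PER_TB

paid_without_egress = gross_storage * SURGE * (1 - HELD_BACK)
paid_with_egress = (gross_storage + gross_egress) * SURGE * (1 - HELD_BACK)

print(f"storage only : ${paid_without_egress:.2f}")
print(f"with egress  : ${paid_with_egress:.2f}")
```

The point is only that dropping the egress term shrinks both the payout and the held amount, which matches what I see in my wallet.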

Thanks for any help :wink:

I have kind of the same issue; however, for me it is the disk space used that month that isn't displayed in the payout dashboard. After checking my ETH wallet, it looks like the full amount has been paid, so in my case it is just a display error.

I suggest contacting support directly and providing the node ID so they can check. If earnings.py displays a different amount than the transactions you received, it should be investigated.

Thank you very much, I have submitted a ticket.
We will see :wink:
I will post an update here on what happens.

Update: they created an internal issue yesterday.
They will contact me when there is news.
I will wait and see what happens at the next switch of the payment cycle (July to August): whether the egress from July disappears or whether it is now tracked correctly.


Please send your logs to the ticket.

Thank you for your help.
Unfortunately, I can't find many logs.
I only found an old log file from December 2019.
Is there a special location with an additional log file?
The docker logs unfortunately get erased after every restart (or update).
Would it help if I started logging to an external log file from now on?

Thanks!

We have received no bandwidth settlements from your node since May 2nd, and there were none during all of April either.
It looks like something may be going wrong with your node's order submission.
Maybe it would be good to write the logs to a file that will not be erased.
In any case, if you restart your node, seeing some logs around orders would be very helpful in figuring out what is going on!
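As a rough sketch of that check, assuming the docker output has first been saved to a file (for example with "docker logs storagenode > node.log 2>&1"; the container name and file name are placeholders), something like this pulls the order-related warnings and errors out of the log:

```python
# Sketch: filter a saved storagenode log for order-related warnings and errors.
# The file name is a placeholder; save the docker output to it first.
import sys

log_path = sys.argv[1] if len(sys.argv) > 1 else "node.log"

with open(log_path, errors="replace") as logs:
    for line in logs:
        # keep only WARN/ERROR lines that mention orders in some form
        if ("ERROR" in line or "WARN" in line) and "order" in line.lower():
            print(line.rstrip())
```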


OK, that sounds interesting and strange ^^
You mean you are still receiving no bandwidth settlements? Does that mean the current egress will also disappear?

I will start logging; maybe we will find something :slight_smile:

Currently on my node:

OK, after collecting logs for a day I see a lot of perfectly normal entries, but there are a few errors and warnings which may be a hint to my problem:

ERROR piecestore download failed {"Piece ID": "YJDV5FMPFRPH6GDWAADYUG5OUOSLPMOH5KWH54S3GD3AN3LCNXEQ", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "GET", "error": "trust: rpccompat: context canceled", "errorVerbose": "trust: rpccompat: context canceled\n\tstorj.io/common/rpc.Dialer.dialTransport:310\n\tstorj.io/common/rpc.Dialer.dial:267\n\tstorj.io/common/rpc.Dialer.DialNodeURL:177\n\tstorj.io/storj/storagenode/trust.Dialer.func1:51\n\tstorj.io/storj/storagenode/trust.IdentityResolverFunc.ResolveIdentity:43\n\tstorj.io/storj/storagenode/trust.(*Pool).GetSignee:143\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).VerifyOrderLimitSignature:134\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).verifyOrderLimit:62\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download:467\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func2:1004\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:107\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:56\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:111\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:62\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:99\n\tstorj.io/drpc/drpcctx.(*Tracker).track:51"}

and

WARN orders DB contains invalid marshalled orders {"error": "ordersdb error: database disk image is malformed", "errorVerbose": "ordersdb error: database disk image is malformed\n\tstorj.io/storj/storagenode/storagenodedb.(*ordersDB).ListUnsentBySatellite:169\n\tstorj.io/storj/storagenode/orders.(*Service).sendOrders:177\n\tstorj.io/storj/storagenode/orders.(*Service).Run.func1:134\n\tstorj.io/common/sync2.(*Cycle).Run:92\n\tstorj.io/common/sync2.(*Cycle).Start.func1:71\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}

and

WARN orders some unsent order aren't in the DB {"error": "order not found: order not found: satellite: 121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6, serial number: 442HRDF7YJEGRPCPVXCR7HFWXI; order not found: satellite: 121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6, serial number: YXQAP6BCHVECLAIRJS6ECOT4GE; order not found: satellite: 121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6, serial number: 5YKPUTPFB5HONDUXXHMXMMN5NE; order not found: satellite: 121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6, serial number: WYJKHJ6745EMNH672UFWY7PZIM; order not found: satellite: 1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE, serial number: 6ALSQOXVPJCSBAG2M4E4VOJAVQ; order not found: satellite: 1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE, serial number: MX74ZVOAKNGDBERGFYYOAGA6Q4; order not found: satellite: 1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE, serial number: Y5NCPVBUQRHLDPOKLZDQY4UT3I; order not found: satellite: 1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE, serial number: 77LJ75TDVNHQ7FKOFWZBCTRDBU; order not found: satellite: 1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE, serial number: LFIE5E67NJBXLG6G6NZIK7HXGI; order not found: satellite: 1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE, serial number: GFZXWYVQQBDSRKFQCWQ3NKFWBE; order not found: satellite: 12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S, serial number: 5JOYAEJ7JJAHLIFK43DY2K22V4; order not found: satellite: 12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S, serial number: 3LLOVWDC4RFXBMZQAFLCOSYNF4", "errorVerbose": "order not found: order not found: satellite: 121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6, serial number: 442HRDF7YJEGRPCPVXCR7HFWXI; order not found: satellite: 121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6, serial number: YXQAP6BCHVECLAIRJS6ECOT4GE; order not found: satellite: 121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6, serial number: 5YKPUTPFB5HONDUXXHMXMMN5NE; order not found: satellite: 121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6, serial number: WYJKHJ6745EMNH672UFWY7PZIM; order not found: satellite: 1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE, serial number: 6ALSQOXVPJCSBAG2M4E4VOJAVQ; order not found: satellite: 1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE, serial number: MX74ZVOAKNGDBERGFYYOAGA6Q4; order not found: satellite: 1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE, serial number: Y5NCPVBUQRHLDPOKLZDQY4UT3I; order not found: satellite: 1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE, serial number: 77LJ75TDVNHQ7FKOFWZBCTRDBU; order not found: satellite: 1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE, serial number: LFIE5E67NJBXLG6G6NZIK7HXGI; order not found: satellite: 1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE, serial number: GFZXWYVQQBDSRKFQCWQ3NKFWBE; order not found: satellite: 12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S, serial number: 5JOYAEJ7JJAHLIFK43DY2K22V4; order not found: satellite: 12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S, serial number: 3LLOVWDC4RFXBMZQAFLCOSYNF4\n\tstorj.io/storj/storagenode/storagenodedb.(*ordersDB).Archive.func1:193\n\tstorj.io/storj/storagenode/storagenodedb.(*ordersDB).Archive:213\n\tstorj.io/storj/storagenode/orders.(*Service).handleBatches.func2:238\n\tstorj.io/storj/storagenode/orders.(*Service).handleBatches:262\n\tstorj.io/storj/storagenode/orders.(*Service).sendOrders.func1:189\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}

There are these three entries and then thousands of successful downloads logged.

Next step: I will try to repair the database.
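For reference, this is roughly how the check and rebuild can be approached with Python's built-in sqlite3 module. It is only a sketch: the file names are placeholders, the node has to be stopped first, and a badly damaged file may still need the sqlite3 CLI dump-and-reload route instead.

```python
# Sketch: check the orders database for corruption and, if it is still readable,
# rebuild it by replaying a dump into a fresh file. File names are placeholders;
# stop the node before touching the .db files.
import sqlite3

DB_PATH = "orders.db"                 # placeholder: adjust to your storage location
REBUILT_PATH = "orders_rebuilt.db"    # new file to load the recovered data into

src = sqlite3.connect(DB_PATH)

# "ok" means SQLite found no corruption; anything else lists the problems.
status = src.execute("PRAGMA integrity_check;").fetchone()[0]
print("integrity_check:", status)

if status != "ok":
    dst = sqlite3.connect(REBUILT_PATH)
    try:
        # iterdump() yields SQL statements much like the CLI ".dump" command;
        # replaying them into an empty database recreates every readable row.
        dst.executescript("\n".join(src.iterdump()))
        print("rebuilt database written to", REBUILT_PATH)
    except sqlite3.DatabaseError as err:
        print("dump aborted, fall back to the sqlite3 CLI procedure:", err)
    finally:
        dst.close()

src.close()
```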

Edit: Shame on me, I realize that I already saw this error a long time ago (probably around the beginning of May) and tried to fix it back then. I realized that repairing my orders.db, which is nearly 2.5 GB, would take a loooooooong time, even on my SSD.
I skipped it because the node was, as far as the dashboard showed, running smoothly.

Now I started the process; 30 minutes in, it has processed 10.5 MB of 2.4 GB. I will let it run; we will see.
Fortunately there is no penalty for downtime at the moment xD

Yep, thanks, the process is running :slight_smile:
Could this be the reason for the missing egress tracking?
Could this problem lead to disqualification?

Thx!

Yes, because the orders were lost due to DB corruption :arrow_down:

There are only two ways to get disqualified:

  1. Failing audits (check your logs for lines that contain both "download failed" and "GET_AUDIT"; see the sketch below)
  2. Downtime (disqualification for downtime is not currently in effect)
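A minimal sketch of that audit check, assuming the logs have been saved to a file (the file name is a placeholder):

```python
# Sketch: count failed audit downloads in a saved log file. The two search
# strings come straight from the advice above; the file name is a placeholder.
failed_audits = 0
with open("node.log", errors="replace") as logs:
    for line in logs:
        if "download failed" in line and "GET_AUDIT" in line:
            failed_audits += 1
            print(line.rstrip())
print("failed audits found:", failed_audits)
```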

Fortunately, it looks like the problem may have been isolated to the orders database. Across all satellites your node looks good: good audit reputation, not suspended or disqualified.
With the DB fixed, you should start seeing, and getting paid for, egress again.


The database is fixed now and the node is running smoothly :slight_smile:
Can you see whether new bandwidth settlements have been submitted?

Thank you all very much for your help :wink:


It looks like we started seeing bandwidth orders from your node again at 07/17/20 04:00 UTC :slight_smile:
