No Upload Activity for an Entire Day

UPDATE 2: My understanding of the labels in the log was backwards. The node was sending data to customers all along, but the free space displayed on the dashboard was incorrect.

Every once in a while I will look at the live log as requests come in, and lately I have been noticing no upload activity at all. I traced it all the way back to the 1st of April. Anyone else seeing the same thing?

UPDATE: The dashboard was reporting 100 GB of available storage space. I increased my allocation from 2 TB to 3 TB and can now see uploads. Strange. A bug in the software? I can recreate the problem.

I noticed this entry in the log, so I increased the allocation amount to see what would happen.
2020-04-02T16:17:15.363-0700 WARN piecestore:monitor Used more space than allocated. Allocating space {"bytes": 2000000000000}
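That WARN seems to come down to a simple comparison of allocated space against what is already used. Here is a rough illustration of the idea (my own sketch, not the actual storagenode code; the used-space figure is hypothetical):

```python
# My own illustration of what that WARN appears to be checking; this is
# not the actual storagenode code. Numbers mirror the ones from my node.

allocated_bytes = 2_000_000_000_000   # 2 TB allocation (the value in the WARN line)
used_bytes = 2_040_000_000_000        # hypothetical: slightly more than allocated

if used_bytes > allocated_bytes:
    # Roughly the condition that would produce the log entry above.
    print("WARN piecestore:monitor Used more space than allocated. "
          "Allocating space", {"bytes": allocated_bytes})

available_bytes = max(allocated_bytes - used_bytes, 0)
print("available for new pieces:", available_bytes, "bytes")  # 0 here, so no room left
```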

Should the software still complete uploads even if space runs out? In this scenario, it did not.


It's possible for your node to be full even though the dashboard says you have space available. The dashboard has not shown the correct available disk space for well over a month. A user reported this problem 45 days ago, but as far as I know the Storj team has not given any indication that they are working on fixing this bug. I'm not sure whether your specific issue is caused by this bug or by another problem.

The only way I know to see the true available space is to check the log file for a line containing "Action": "PUT", "Available Space": xxxx (where xxxx is the free space in bytes). That line only appears when an upload starts, so if your node is full you won't see it, because you won't receive uploads.

The number displayed on the dashboard as "available" seems to be roughly equal to the space used by trash plus any actual free space. So if you have no free space, the number shown is just the amount of junk in your node's trash folder. At least that's how it seems to work on my node. In your case that would mean you have 100 GB of trash in your trash folder, which seems like a lot, so I don't know what's going on there.
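If it helps, here is a rough Python sketch of how you could pull the most recent "Available Space" value out of the node log yourself. The log path is an assumption; point it at wherever your node writes its log.

```python
# Rough sketch: report the newest "Available Space" value in the storagenode log.
# LOG_PATH is a placeholder; adjust it for your setup.
import json
import re

LOG_PATH = "/path/to/storagenode.log"  # hypothetical path

last_available = None
with open(LOG_PATH, encoding="utf-8") as log:
    for line in log:
        if '"Available Space"' not in line:
            continue
        # The structured fields are the JSON object at the end of the line.
        match = re.search(r"\{.*\}$", line.strip())
        if match:
            fields = json.loads(match.group(0))
            last_available = fields.get("Available Space", last_available)

if last_available is None:
    print("No 'Available Space' entries found; a full node receives no uploads, so none are logged.")
else:
    print(f"Latest reported available space: {last_available / 1e12:.3f} TB")
```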

The HDD was not full. What is strange is how upload activity to customers stopped completely. There will be a time when the HDD is full, and that data must still upload when a customer requests it. So far it seems this bug would prevent customers from accessing their data, so it needs to be looked into.

Unless I am reading the upload and download labels wrong.

The Storj logs describe the traffic from the customer's point of view instead of your node's. Upload is from the customer to your node, also called ingress. Download is data that the customer is downloading from you, or egress. Any other traffic-monitoring utility, such as the one you posted a picture of, probably uses the words download and upload in the conventional sense instead of reversing them like the Storj logs do.
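If you want to see the split the way the logs mean it, a rough sketch like this tallies completed ingress (PUT, logged as "uploaded") against egress (GET, logged as "downloaded") per day. Again, the log path is an assumption; adjust it for your setup.

```python
# Rough sketch: count completed PUT (ingress) and GET (egress) events per day
# in the storagenode log. LOG_PATH is a placeholder.
from collections import Counter

LOG_PATH = "/path/to/storagenode.log"  # hypothetical path

ingress = Counter()   # customer -> your node (PUT, logged as "uploaded")
egress = Counter()    # your node -> customer (GET, logged as "downloaded")

with open(LOG_PATH, encoding="utf-8") as log:
    for line in log:
        day = line[:10]  # log timestamps start with YYYY-MM-DD
        if " uploaded " in line:
            ingress[day] += 1
        elif " downloaded " in line:
            egress[day] += 1

for day in sorted(set(ingress) | set(egress)):
    print(f"{day}  ingress(PUT)={ingress[day]:6}  egress(GET)={egress[day]:6}")
```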

Test data has some specific rules, e.g. 1 download for every X uploads. If your node is full, the test data traffic stops.
The remaining traffic comes from customer downloads.

I'm also not seeing any egress (GET) at all in the last 48 h.
On the other hand, my nodes are receiving far more data than usual (PUT).
I suppose the testing satellite is testing the upload capacity of the network.

Is there some place where we can find more information regarding the ongoing tests?

So I must have gotten the log labels wrong. I thought uploaded meant the customer was downloading data from the node. I now understand the following:

download (GET) = Customer Downloading Data from node (Egress)
uploaded (PUT) = Customer Uploading Data to node (Ingress)

So the node was working correctly in that it allowed customers to download data when the node thought it was full.
Still, the dashboard counter was wrong.

No, it doesn't matter. The testers are customers too.

I'm seeing the exact opposite: almost only uploads in the log, which means I'm downloading… so the OP should be uploading…

I just assumed my node was finally vetted and was getting up to date or something.
It also started about April 1st.

Maybe this particular log grab doesn't perfectly mirror how it looks; I've gotten something like 350 GB in and sent about 1 GB out… :smiley: But out / egress is slowly going up… so it's most likely just the vetting being done in my case…

Good to know I'm not the only one… been waiting for somebody to complain xD
Not that I want to complain about getting 100 GB+ of ingress a day… it's a good start.

2020-04-03T10:07:16.455Z        INFO    piecestore      uploaded        {"Piece ID": "NLAUI2ZGQDDKL6P2NRN66DK2JAX4QO42ZPLMS2KCNC2J673LMIBQ", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "PUT"}
2020-04-03T10:07:16.591Z        INFO    piecestore      upload canceled {"Piece ID": "6CIBPL34QEFI57XOF7IPYKD6F3ATFD67PLIDULMWHRT2GEYRLZCA", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "PUT", "error": "context canceled", "errorVerbose": "context canceled\n\tstorj.io/common/pb/pbgrpc.init.0.func3:70\n\tstorj.io/common/rpc/rpcstatus.Wrap:77\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).doUpload:452\n\tstorj.io/storj/storagenode/piecestore.(*drpcEndpoint).Upload:215\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:988\n\tstorj.io/drpc/drpcserver.(*Server).doHandle:199\n\tstorj.io/drpc/drpcserver.(*Server).HandleRPC:173\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:124\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:161\n\tstorj.io/drpc/drpcctx.(*Tracker).track:51"}
2020-04-03T10:07:16.667Z        INFO    piecestore      upload started  {"Piece ID": "KM6IGEWZM57KEPMQX2QUKP7WZ6FTR2O5AUWIB6DNW6SP7TIOEUAA", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "PUT", "Available Space": 4617776516850}
2020-04-03T10:07:16.719Z        INFO    piecestore      download started        {"Piece ID": "FLJDRID7EQB7U4Z2PB77SVWD5N4WKU64HYA2LN4OAXSOQCHVWYOQ", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "GET"}
2020-04-03T10:07:16.872Z        INFO    piecestore      downloaded      {"Piece ID": "FLJDRID7EQB7U4Z2PB77SVWD5N4WKU64HYA2LN4OAXSOQCHVWYOQ", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "GET"}
2020-04-03T10:07:17.086Z        INFO    piecestore      upload canceled {"Piece ID": "KM6IGEWZM57KEPMQX2QUKP7WZ6FTR2O5AUWIB6DNW6SP7TIOEUAA", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "PUT", "error": "context canceled", "errorVerbose": "context canceled\n\tstorj.io/common/pb/pbgrpc.init.0.func3:70\n\tstorj.io/common/rpc/rpcstatus.Wrap:77\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).doUpload:452\n\tstorj.io/storj/storagenode/piecestore.(*drpcEndpoint).Upload:215\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:988\n\tstorj.io/drpc/drpcserver.(*Server).doHandle:199\n\tstorj.io/drpc/drpcserver.(*Server).HandleRPC:173\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:124\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:161\n\tstorj.io/drpc/drpcctx.(*Tracker).track:51"}
2020-04-03T10:07:17.438Z        INFO    piecestore      upload canceled {"Piece ID": "2KULOEZEAWZXVZV7SRBKVX5KSOYFE7QX3ZSCG7W2UY4OM6ID3LFA", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "PUT", "error": "context canceled", "errorVerbose": "context canceled\n\tstorj.io/common/pb/pbgrpc.init.0.func3:70\n\tstorj.io/common/rpc/rpcstatus.Wrap:77\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).doUpload:452\n\tstorj.io/storj/storagenode/piecestore.(*drpcEndpoint).Upload:215\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:988\n\tstorj.io/drpc/drpcserver.(*Server).doHandle:199\n\tstorj.io/drpc/drpcserver.(*Server).HandleRPC:173\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:124\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:161\n\tstorj.io/drpc/drpcctx.(*Tracker).track:51"}
2020-04-03T10:07:17.440Z        INFO    piecestore      upload canceled {"Piece ID": "ECVKLHJVSWCMGA3Y7FKIBV6SIOMWMVERMYQRIG35Y7ZMKFLHYKVQ", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "PUT", "error": "context canceled", "errorVerbose": "context canceled\n\tstorj.io/common/pb/pbgrpc.init.0.func3:70\n\tstorj.io/common/rpc/rpcstatus.Wrap:77\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).doUpload:452\n\tstorj.io/storj/storagenode/piecestore.(*drpcEndpoint).Upload:215\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:988\n\tstorj.io/drpc/drpcserver.(*Server).doHandle:199\n\tstorj.io/drpc/drpcserver.(*Server).HandleRPC:173\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:124\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:161\n\tstorj.io/drpc/drpcctx.(*Tracker).track:51"}
2020-04-03T10:07:17.814Z        INFO    piecestore      upload started  {"Piece ID": "2CVY5MOH476FHCKZNENOHDJSQH3MBGEXK3WG66DJ2Y47ED6CETXQ", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "PUT", "Available Space": 4617771321842}
2020-04-03T10:07:17.908Z        INFO    piecestore      upload started  {"Piece ID": "PRYUYD3I4I6LCHLXKGFIXZZH7FANWCM6CLTLKNNZ57XFTUD3FIHA", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "PUT", "Available Space": 4617771321842}
2020-04-03T10:07:18.759Z        INFO    piecestore      upload started  {"Piece ID": "2CLAYUBZQP4DTM5SDWFST6J4SNOO35AL34NERRPYIXR6PBF2JKAA", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "PUT", "Available Space": 4617771321842}
2020-04-03T10:07:19.508Z        INFO    piecestore      uploaded        {"Piece ID": "2CVY5MOH476FHCKZNENOHDJSQH3MBGEXK3WG66DJ2Y47ED6CETXQ", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "PUT"}
2020-04-03T10:07:19.525Z        INFO    piecestore      upload started  {"Piece ID": "A7SVXKQII5GTCZ52IMDAS7W43N3ONFRJDY5SBB63TBO2FJKIXZ5A", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Action": "PUT", "Available Space": 4617769001970}
2020-04-03T10:07:19.582Z        INFO    piecestore      download started        {"Piece ID": "KCR7K4P2DJ66H2NZ326EDF6KBPCFL7GIHNERPMYBCGRNKXFNPDOA", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "GET"}
2020-04-03T10:07:20.496Z        INFO    piecestore      uploaded        {"Piece ID": "A7SVXKQII5GTCZ52IMDAS7W43N3ONFRJDY5SBB63TBO2FJKIXZ5A", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Action": "PUT"}
2020-04-03T10:07:20.512Z        INFO    piecestore      downloaded      {"Piece ID": "KCR7K4P2DJ66H2NZ326EDF6KBPCFL7GIHNERPMYBCGRNKXFNPDOA", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "GET"}
2020-04-03T10:07:20.666Z        INFO    piecestore      uploaded        {"Piece ID": "PRYUYD3I4I6LCHLXKGFIXZZH7FANWCM6CLTLKNNZ57XFTUD3FIHA", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "PUT"}
2020-04-03T10:07:20.711Z        INFO    piecestore      upload started  {"Piece ID": "J7Q4P5BYZ7DAMANO42URVTRGW7KPWYT5O7JHQWJNVGEEAMI6NAFA", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "PUT", "Available Space": 4617764661234}
2020-04-03T10:07:21.769Z        INFO    piecestore      upload started  {"Piece ID": "UKGZGYEX6ZDZSYLADR2M5DBMYBBLC2FVJ5OEE2WSY4WVOVGFGZLA", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "PUT", "Available Space": 4617764661234}
2020-04-03T10:07:22.013Z        INFO    piecestore      download started        {"Piece ID": "M2W53F5FN4ZLANIRWPQN5POWZUE4QZCWXC3PJTKKURIVXQ652L5A", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "GET"}
2020-04-03T10:07:22.069Z        INFO    piecestore      upload started  {"Piece ID": "X25W4EX3NXRUTJQ5D7QNGQIXZVNBTA2JTG7D77WXWVR7ZQ65LXLA", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "PUT", "Available Space": 4617764661234}
2020-04-03T10:07:22.397Z        INFO    piecestore      downloaded      {"Piece ID": "M2W53F5FN4ZLANIRWPQN5POWZUE4QZCWXC3PJTKKURIVXQ652L5A", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "GET"}
2020-04-03T10:07:23.076Z        INFO    piecestore      uploaded        {"Piece ID": "2CLAYUBZQP4DTM5SDWFST6J4SNOO35AL34NERRPYIXR6PBF2JKAA", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "PUT"}
2020-04-03T10:07:23.566Z        INFO    piecestore      upload canceled {"Piece ID": "J7Q4P5BYZ7DAMANO42URVTRGW7KPWYT5O7JHQWJNVGEEAMI6NAFA", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "PUT", "error": "context canceled", "errorVerbose": "context canceled\n\tstorj.io/common/pb/pbgrpc.init.0.func3:70\n\tstorj.io/common/rpc/rpcstatus.Wrap:77\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).doUpload:452\n\tstorj.io/storj/storagenode/piecestore.(*drpcEndpoint).Upload:215\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:988\n\tstorj.io/drpc/drpcserver.(*Server).doHandle:199\n\tstorj.io/drpc/drpcserver.(*Server).HandleRPC:173\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:124\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:161\n\tstorj.io/drpc/drpcctx.(*Tracker).track:51"}
2020-04-03T10:07:23.835Z        INFO    piecestore      uploaded        {"Piece ID": "X25W4EX3NXRUTJQ5D7QNGQIXZVNBTA2JTG7D77WXWVR7ZQ65LXLA", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "PUT"}

Note: The satellites are already updated.


I too am seeing a huge drop-off in egress after April 1st, but you can see my ingress has doubled. Some other nodes I have with 700 GB of data have only had 2 GB of egress so far this month.

[image: node traffic graph]

I've also got a total drop-off in egress on the 1st, and my ingress has increased a lot.