ERROR blobscache satPiecesTotal < 0 satPiecesContentSize < 0

My oldest node moved 20GB to the trash and at the same time logged around 10k of these errors:

2021-02-28T18:34:55.456Z        ERROR   blobscache      satPiecesTotal < 0      {"satPiecesTotal": -1792}
2021-02-28T18:34:55.456Z        ERROR   blobscache      satPiecesContentSize < 0        {"satPiecesContentSize": -1280}

They don’t appear anymore, but should I be worried about them?
After all, they are classified as errors, even though the message isn’t very helpful to me.


All my us2 satellite data went to trash today around that time, so it may well be that, but I did not see any errors in the logs other than the satellite being down.


Yeah, all my us2 pieces got moved to trash too, but an hour later, and I only had 8 GB on us2.


I noticed the same errors in the logs of two nodes:

sls blobs W:\storagenode5\storagenode.log -Context 10,1 | select -Last 11

> W:\storagenode5\storagenode.log:511996:2021-09-25T03:28:27.680Z       ERROR   blobscache      piecesContentSize < 0 {"piecesContentSize": -20992}
> W:\storagenode5\storagenode.log:511997:2021-09-25T03:28:27.680Z       ERROR   blobscache      satPiecesTotal < 0 {"satPiecesTotal": -30208}
> W:\storagenode5\storagenode.log:511998:2021-09-25T03:28:27.681Z       ERROR   blobscache      satPiecesContentSize < 0 {"satPiecesContentSize": -29696}
  W:\storagenode5\storagenode.log:511999:2021-09-25T03:28:27.843Z       INFO    piecestore      upload started  {"Piece ID":"KAQWKNX5O54V6UOFGQEQ526CHVKQ4Z2HPUU7HWE57OFKX6ZYAWFQ", "Satellite ID":"12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB", "Action": "PUT", "Available Space": 58405191680}
  W:\storagenode5\storagenode.log:512000:2021-09-25T03:28:28.397Z       INFO    piecestore      uploaded        {"PieceID":"YMTUJ7HBB6VW3VV4SHEB5QOGJUNRVGY57KQQK7ZZEGY4S3T7EBWQ", "Satellite ID":"12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Size": 14592}
> W:\storagenode5\storagenode.log:512001:2021-09-25T03:28:28.541Z       ERROR   blobscache      piecesTotal < 0 {"piecesTotal":-7680}
> W:\storagenode5\storagenode.log:512002:2021-09-25T03:28:28.541Z       ERROR   blobscache      piecesContentSize < 0 {"piecesContentSize": -7168}
> W:\storagenode5\storagenode.log:512003:2021-09-25T03:28:28.541Z       ERROR   blobscache      satPiecesTotal < 0 {"satPiecesTotal": -7680}
> W:\storagenode5\storagenode.log:512004:2021-09-25T03:28:28.541Z       ERROR   blobscache      satPiecesContentSize < 0 {"satPiecesContentSize": -7168}
> W:\storagenode5\storagenode.log:512005:2021-09-25T03:28:28.810Z       ERROR   blobscache      piecesTotal < 0 {"piecesTotal":-1280}
> W:\storagenode5\storagenode.log:512006:2021-09-25T03:28:28.810Z       ERROR   blobscache      piecesContentSize < 0 {"piecesContentSize": -768}
> W:\storagenode5\storagenode.log:512007:2021-09-25T03:28:28.810Z       ERROR   blobscache      satPiecesTotal < 0 {"satPiecesTotal": -1280}
> W:\storagenode5\storagenode.log:512008:2021-09-25T03:28:28.810Z       ERROR   blobscache      satPiecesContentSize < 0 {"satPiecesContentSize": -768}
  W:\storagenode5\storagenode.log:512009:2021-09-25T03:28:29.626Z       INFO    piecestore      upload started  {"PieceID":"4EEBZMMC4PIVKYFCKNBBPCDDXI2GNAXXS6NTPDTFCHCCMPWDDLSQ", "Satellite ID":"12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Available Space": 58405175296}

This also happened to me this morning.
I restarted the node and it’s gone; not sure what it was.
Node version: 1.102.3


It means the values in the databases are not up to date. The restart will trigger a used-space-filewalker, which should recalculate the used space and update the databases on a successful finish (for each trusted satellite).
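To verify that the walk actually ran through, you can grep the log for the filewalker entries. A minimal sketch, assuming a Windows log path like the one in the post above (the exact message text varies between versions, so match loosely on "used-space"):

# show the most recent used-space-filewalker entries; there should be a
# started/completed pair per trusted satellite, with no errors in between
sls "used-space" W:\storagenode5\storagenode.log | select -Last 20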

2024-07-17T15:43:35Z    ERROR   blobscache      satPiecesTotal < 0      {"Process": "storagenode", "satPiecesTotal": -3840}
2024-07-17T15:43:35Z    ERROR   blobscache      satPiecesContentSize < 0        {"Process": "storagenode", "satPiecesContentSize": -3328}

I got these lines in the log, and the log is full of them, on a node running v1.108 with the startup filewalker still unfinished and 1 TB allocated out of 22 TB to stop ingress. The disk has 4 TB free.
I stopped the node for the 3rd time, deleted the databases and restarted again; maybe this time it will finish the walk without any more useless errors filling the log.
I set piecestore, collector and blobscache to fatal, roughly as shown below.
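That per-subsystem override goes into config.yaml, roughly like this, assuming the log.custom-level option of recent versions (check storagenode run --help for the exact option name and accepted values on your release):

# raise only the noisy subsystems to fatal, leave everything else at the default level
log.custom-level: "piecestore=FATAL,collector=FATAL,blobscache=FATAL"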

They mean the same thing: the data in the databases is way off from the real usage. You need a successful used-space-filewalker run for all trusted satellites and no database errors during the process.
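If database errors do show up, a quick way to check whether a database file is healthy is SQLite’s integrity check, run while the node is stopped. A minimal sketch; the path is illustrative, and piece_spaced_used.db is, as far as I know, the file holding the cached usage figures:

# should print "ok"; anything else means the file needs to be recreated or restored
sqlite3 W:\storagenode5\storage\piece_spaced_used.db "PRAGMA integrity_check;"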