Zero Ingress for three days

I’ve had 0 ingress for three days. No canceled uploads… nothing. Just downloads. My node runs 1.5.2 and has been on the net since January. I have already rebooted the node. Anybody have an idea? Less ingress I would understand, but 0?!? I could not find a single upload attempt in the log for the last 3 days.

How much free space does it show on the dashboard?

On the dashboard it shows 337 GB left. I just restarted storj and got this message, which explains why I don’t get any uploads: “Used more space than allocated. Allocating space”
Hm… still 337 GB left…

Is it a Linux or Windows node? How big is your HDD? How much space have you allocated?

It’s a Raspberry Pi with a 3 TB HDD attached via USB. I allocated 2.5 TB.

Check how much space you have in your trash folder. I’m willing to bet it’s very close to 337 GB.

There was a huge purge of zombie segments on one satellite. Garbage-collection on your node moved them to the trash folder. This space is not shown on the dashboard, but the node takes it into account when deciding if it has enough space to store additional pieces. My guess is that most of that 337 GB is actually used by the trash folder.
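To illustrate that accounting (only the 337 GB figure comes from this thread; the “used” number is hypothetical): the node subtracts both live pieces and trash from the allocation, so it can refuse uploads while the dashboard, which ignores trash, still shows free space:

```shell
#!/bin/sh
# Hypothetical breakdown for the 2.5 TB allocation mentioned above.
# Only TRASH=337 is from the thread; USED is made up for illustration.
ALLOCATED=2500   # GB allocated to the node
USED=2163        # GB of live pieces (hypothetical)
TRASH=337        # GB sitting in the trash folder
FREE=$((ALLOCATED - USED - TRASH))
echo "free for uploads: ${FREE} GB"   # prints 0: node stops accepting pieces
```

The dashboard would show ALLOCATED − USED = 337 GB “left”, while the node itself sees 0 GB available.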

I have at least two nodes showing the same situation: the node doesn’t appear full but isn’t accepting new uploads, and there is 200–300 GB in trash.

In less than 7 days, that space will be reclaimed and your node will accept new pieces again.

You are right. Thx for your help

there is an error in the space accounting atm… you may need to restart the node to reset the space accounting…

mine is currently about 4 TB off on a 10 TB node that has been running for a week… it will say you have run out of space much sooner than you actually do…

i dunno… worth a shot anyway… it can’t hurt to simply restart the storagenode

if you are using the CLI, check the available space using:

docker exec -it storagenode /app/

I have around −0.8 TB of deviation in the measurements… and of course I am not getting a single byte.
Thank God egress is good enough.

and this is not a problem that will get solved in 7 days… i have been seeing this behaviour for about a month now.
This month: 6 GB ingress vs 800 GB egress

Running GE (graceful exit) on stefan’s satellite would release 0.9 TB.

Thank you for that hint. Very interesting. I have −17 GB of disk space.

i get this:

Last Contact ONLINE
Uptime       174h17m37s

                   Available        Used       Egress     Ingress
     Bandwidth           N/A      2.6 TB     421.8 GB      2.2 TB (since Jun 1)
          Disk       13.9 TB     10.1 TB

however, that’s what i set it to… 24 TB allocated and 10.1 TB used, that seems about right…

then we look in the logs…

2020-06-23T20:23:15.541Z INFO piecestore upload started {"Piece ID": "GCZAM2GPQBAM63MCCQ23GIFS3XY4BLHJXLHHDETHTRGD4D4RTGVA", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Action": "PUT", "Available Space": 7297376462059}

and i get that… which puts me way, way off… though i must admit there isn’t 14 TB free on the pool atm… but that aside there is still about 12 TB free, not the 7-something trillion bytes (i.e. ~7.3 TB) the log reports
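For reference, the log’s “Available Space” value is in bytes; converted to the dashboard’s decimal TB (1 TB = 10^12 bytes) it comes out as:

```shell
#!/bin/sh
# Convert the "Available Space" figure from the log line above (bytes)
# into decimal TB, the unit the dashboard uses.
AVAIL_BYTES=7297376462059
awk -v b="$AVAIL_BYTES" 'BEGIN { printf "%.2f TB\n", b / 1e12 }'
# prints "7.30 TB"
```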

tank/storagenodes  9.19T  11.9T     30.6K  /tank/storagenodes

the zfs numbers are usually spot on when compared to something like the du command

You should never assign more than the available space. I mean, I’m sure you’ll catch it before it becomes a problem, but why take the risk?

There is a miscalculation where trash gets counted twice, which has a big impact. I’d restart the node (a good opportunity to also lower the allocation to at most 90% of available space) and give it time to walk all the pieces in your storage to determine the actual used size. This may take a while, but hopefully after that the numbers will make a little more sense again.
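As a quick sketch of that 90% rule of thumb (using the 3 TB drive from earlier in the thread as the example; the exact percentage is the poster’s recommendation, not a hard limit):

```shell
#!/bin/sh
# 90% of a 3 TB (3000 GB) drive; integer arithmetic is fine at this scale.
DISK_GB=3000
SAFE_GB=$((DISK_GB * 90 / 100))
echo "allocate at most: ${SAFE_GB} GB"   # prints "allocate at most: 2700 GB"
```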

it barely fits… might be a TB or two off… but i still need to replace a 3 TB drive in a raidz1 with a set of 6 TB drives, so when i add the last one the total will go up by 6 TB… so i just haven’t bothered changing it… it will take months before it becomes a real concern…

but yeah you are absolutely right and i wouldn’t recommend anyone else do this either… xD

it might fit, it might not… it will be very close, within a 10% margin of error… and zfs can be a bit problematic in calculating space used exactly, because of its compression and such… i get like 4-6 different sizes depending on how i look at it, and on top of that comes the whole MiB vs MB thing, so i just plan to stay well ahead and not get close to the limit…
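The MiB-vs-MB point alone accounts for roughly a 10% gap at this scale: zfs reports binary units (TiB, 1024^4 bytes) while the dashboard uses decimal TB (10^12 bytes). Converting the pool’s 9.19T used, for example:

```shell
#!/bin/sh
# 1 TiB = 1024^4 bytes and 1 TB = 10^12 bytes, so 1 TiB is about 1.0995 TB.
awk 'BEGIN { printf "9.19 TiB = %.1f TB\n", 9.19 * 1024 ^ 4 / 1e12 }'
# prints "9.19 TiB = 10.1 TB"
```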

but yeah, the main reason is to keep a note for myself to remember to get the last 3 TB drive changed for a 6 TB one :smiley: