Since last week my 2.5TB (100% full) node has been eating 3GB of RAM, which is all that is available on that system… the system only has 8GB total, and ~3GB are free without the storage node running.
Just deleted the docker image and pulled a fresh one to make sure; currently running v1.11.1, but it was already happening on earlier versions.
Any idea what could be happening?
Edit: if I set a memory limit in docker, for example 1.5GB, the node stops working after a few minutes. The full 3GB gets consumed in the first few minutes after startup.
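For anyone trying to reproduce this, a minimal sketch of capping container memory with docker's `--memory` flag (the image tag here is illustrative; a real run command would also include your identity, ports, and storage mounts):

```shell
# Illustrative only: run the storagenode container with a hard memory cap.
# With --memory set, the kernel OOM-kills the container once it exceeds
# the limit, which matches the "node stops working" behavior above.
docker run -d --name storagenode \
  --memory=1.5g \
  storjlabs/storagenode:latest
```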
Yeah, after the post I did that! It seems fixed now… orders.db was giving errors; I followed the steps to fix the db and it seems better now, ~150MB of RAM. Also, my orders.db was ~1.2GB and is now ~990MB, but that still seems a little too much, maybe? Not worried about the size but…
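For reference, the generic sqlite3 checks behind that kind of repair can be sketched like this (a sketch only, assuming the node is stopped and the `sqlite3` CLI is installed; the authoritative step-by-step procedure is in Storj's database-repair documentation):

```shell
# Run these with the node STOPPED so nothing is writing to the database.
# integrity_check reports corruption instead of "ok"; VACUUM rebuilds the
# file and returns free pages to the OS, which is why orders.db can
# shrink noticeably after a repair.
sqlite3 orders.db "PRAGMA integrity_check;"   # prints "ok" on a healthy DB
sqlite3 orders.db "VACUUM;"
```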
Also, how is your HDD connected to the PC running the storagenode, and what filesystem is it using?
What OS?
I’m running the node on my Unraid machine; it has an 8TB WD Red connected via SATA with an XFS filesystem. I’ve been running the node on this machine since December last year and it seems to be working great!
Seems like I can close this since it’s fixed! Well… this is not a GitHub issue.