Memory consumption

Hi All,

After upgrading to Node Version: v0.14.11 I am seeing very high memory consumption:

I believe it depends on the SQLite database size (my info.db is 5.6GB).
Can I ask you, is this normal or is it a bug?

Same for me, although the usage increase seems to slow down.

Memory leak or some sort of internal cache?

Here is detailed information from container:

My memory usage is over 7GB now

Hi! Can I ask what size your info.db is?

Around 6GB currently

Thanks @BrightSilence !

For what it’s worth, it looks like memory consumption is not growing from that point on. So I don’t think we’re talking about a memory leak here. For my system this is not a problem as I have 16GB which is mostly unused. But I wonder how low RAM systems are faring.

It stopped growing for me at around 8.7GB

Then again, incoming traffic also almost stopped…

@BrightSilence Same for me; I just care about systems with low RAM. If it's “by design”, no problem, I already upgraded my RAM from 6GB to 16GB. I think we should wait for the changelog of Node Version: v0.14.11.

Thanks @Pentium100!
That's really helpful!

2019-07-11T03:14:07.039Z INFO piecestore upload failed {"Piece ID": "MI6ODMZYH5VQZVDZUBIQQG2HVJ2Q45NGR2JMYN7O54AG2VHX3C7A", "SatelliteID": "118UWpMCHzs6CvSgWd9BfFVjw5K9pZbJjkfZJexMtSkmKxvvAW", "Action": "PUT_REPAIR", "error": "piecestore protocol: unexpected EOF", "errorVerbose": "piecestore protocol: unexpected EOF\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Upload:241\n\tstorj.io/storj/pkg/pb._Piecestore_Upload_Handler:602\n\tstorj.io/storj/pkg/server.logOnErrorStreamInterceptor:23\n\tgoogle.golang.org/grpc.(*Server).processStreamingRPC:1209\n\tgoogle.golang.org/grpc.(*Server).handleStream:1282\n\tgoogle.golang.org/grpc.(*Server).serveStreams.func1.1:717"}
2019-07-11T03:14:07.111Z ERROR piecestore protocol: unexpected EOF
storj.io/storj/storagenode/piecestore.(*Endpoint).Upload:241
storj.io/storj/pkg/pb._Piecestore_Upload_Handler:602
storj.io/storj/pkg/server.logOnErrorStreamInterceptor:23
google.golang.org/grpc.(*Server).processStreamingRPC:1209
google.golang.org/grpc.(*Server).handleStream:1282
google.golang.org/grpc.(*Server).serveStreams.func1.1:717
2019-07-11T03:14:18.565Z INFO piecestore upload started {"Piece ID": "5UX5P7WEROQU5FY5A5RQ43VWVQECIQ5NJTWBRTCJ3N7MLDQKMDBQ", "SatelliteID": "118UWpMCHzs6CvSgWd9BfFVjw5K9pZbJjkfZJexMtSkmKxvvAW", "Action": "PUT"}
2019-07-11T03:14:21.602Z INFO piecestore uploaded {"Piece ID": "5UX5P7WEROQU5FY5A5RQ43VWVQECIQ5NJTWBRTCJ3N7MLDQKMDBQ", "SatelliteID": "118UWpMCHzs6CvSgWd9BfFVjw5K9pZbJjkfZJexMtSkmKxvvAW", "Action": "PUT"}
2019-07-11T03:14:28.976Z INFO piecestore upload started {"Piece ID": "DA43NMNIHMH2UMBLDKH7OC7PTNUHXTFYFRWRSD6R2TB3RIZLBZCA", "SatelliteID": "118UWpMCHzs6CvSgWd9BfFVjw5K9pZbJjkfZJexMtSkmKxvvAW", "Action": "PUT"}
2019-07-11T03:14:30.530Z INFO piecestore upload started {"Piece ID": "BKCRBBVA27L44JJOHXRVMAQLCOLAVSBQVYA5UL5ZVI5N4S7RTYLQ", "SatelliteID": "118UWpMCHzs6CvSgWd9BfFVjw5K9pZbJjkfZJexMtSkmKxvvAW", "Action": "PUT"}
2019-07-11T03:14:31.655Z INFO piecestore uploaded {"Piece ID": "DA43NMNIHMH2UMBLDKH7OC7PTNUHXTFYFRWRSD6R2TB3RIZLBZCA", "SatelliteID": "118UWpMCHzs6CvSgWd9BfFVjw5K9pZbJjkfZJexMtSkmKxvvAW", "Action": "PUT"}
2019-07-11T03:14:36.259Z INFO piecestore uploaded {"Piece ID": "BKCRBBVA27L44JJOHXRVMAQLCOLAVSBQVYA5UL5ZVI5N4S7RTYLQ", "SatelliteID": "118UWpMCHzs6CvSgWd9BfFVjw5K9pZbJjkfZJexMtSkmKxvvAW", "Action": "PUT"}
2019-07-11T03:14:39.905Z INFO piecestore download started {"Piece ID": "J62XLUEEISM26W43DOU454UV3IZLCDTU2ZIPUKZAKFJZOIQS27TQ", "SatelliteID": "118UWpMCHzs6CvSgWd9BfFVjw5K9pZbJjkfZJexMtSkmKxvvAW", "Action": "GET"}
fatal error: runtime: out of memory

runtime stack:
runtime.throw(0x9627bf, 0x16)
/usr/local/go/src/runtime/panic.go:617 +0x5c
runtime.sysMap(0x2e800000, 0x2800000, 0x106d670)
/usr/local/go/src/runtime/mem_linux.go:170 +0xac
runtime.(*mheap).sysAlloc(0x105d2b0, 0x2626000, 0x18378, 0x3d1d8)
/usr/local/go/src/runtime/malloc.go:633 +0x184
runtime.(*mheap).grow(0x105d2b0, 0x1313, 0x0)
/usr/local/go/src/runtime/mheap.go:1222 +0x2c
runtime.(*mheap).allocSpanLocked(0x105d2b0, 0x1313, 0x106d680, 0xfac81c)
/usr/local/go/src/runtime/mheap.go:1150 +0x3f4
runtime.(*mheap).alloc_m(0x105d2b0, 0x1313, 0x3f2d0100, 0x11cdc)
/usr/local/go/src/runtime/mheap.go:977 +0xd0
runtime.(*mheap).alloc.func1()
/usr/local/go/src/runtime/mheap.go:1048 +0x3c
runtime.(*mheap).alloc(0x105d2b0, 0x1313, 0x10100, 0x1c01ea0)
/usr/local/go/src/runtime/mheap.go:1047 +0x60
runtime.largeAlloc(0x2625a04, 0x1, 0x1)
/usr/local/go/src/runtime/malloc.go:1055 +0x7c
runtime.mallocgc.func1()
/usr/local/go/src/runtime/malloc.go:950 +0x38
runtime.systemstack(0x0)
/usr/local/go/src/runtime/asm_arm.s:354 +0x84
runtime.mstart()
/usr/local/go/src/runtime/proc.go:1153

Unfortunately, I can confirm the same problem. I have “only” 6GB.


After hitting some threshold, swap usage grows, and after that the computer is pretty much unusable :(.

Unfortunately, it's not possible to stop the Docker container:

Jul 11 16:35:13 sonm dockerd[916]: time="2019-07-11T16:35:13.379544817+02:00" level=info msg="Container failed to stop after sending signal 15 to the process, force killing"
Jul 11 16:35:15 sonm dockerd[916]: time=“2019-07-11T16:35:14.942612395+02:00” level=error msg=“Handler for POST /v1.39/containers/storagenode/stop returned error: cannot stop container: storagenode: Cannot kill container 728a9c3f13f5ab80b5b9e41efd2467e6489793e9bbc4c5ec92a14705ad721b55: unknown error after kill: runc did not terminate sucessfully: fatal error: runtime: out of memory\n\nruntime stack:\nruntime.throw(0x561e84227c65, 0x16)\n\t/.GOROOT/src/runtime/panic.go:608 +0x74 fp=0x7ffc536ceb70 sp=0x7ffc536ceb40 pc=0x561e83d4b014\nruntime.sysMap(0xc000000000, 0x4000000, 0x561e849a0d18)\n\t/.GOROOT/src/runtime/mem_linux.go:156 +0xc9 fp=0x7ffc536cebb0 sp=0x7ffc536ceb70 pc=0x561e83d37109\nruntime.(*mheap).sysAlloc(0x561e84984300, 0x4000000, 0x0, 0x0)\n\t/.GOROOT/src/runtime/malloc.go:619 +0x1c9 fp=0x7ffc536cec38 sp=0x7ffc536cebb0 pc=0x561e83d2aef9\nruntime.(*mheap).grow(0x561e84984300, 0x1, 0x0)\n\t/.GOROOT/src/runtime/mheap.go:920 +0x44 fp=0x7ffc536cec90 sp=0x7ffc536cec38 pc=0x561e83d43604\nruntime.(*mheap).allocSpanLocked(0x561e84984300, 0x1, 0x561e849a0d28, 0x0)\n\t/.GOROOT/src/runtime/mheap.go:848 +0x339 fp=0x7ffc536cecd0 sp=0x7ffc536cec90 pc=0x561e83d43489\nruntime.(*mheap).alloc_m(0x561e84984300, 0x1, 0x2a, 0x0)\n\t/.GOROOT/src/runtime/mheap.go:692 +0x11d fp=0x7ffc536ced10 sp=0x7ffc536cecd0 pc=0x561e83d42c8d\nruntime.(*mheap).alloc.func1()\n\t/.GOROOT/src/runtime/mheap.go:759 +0x4e fp=0x7ffc536ced48 sp=0x7ffc536ced10 pc=0x561e83d7462e\nruntime.(*mheap).alloc(0x561e84984300, 0x1, 0x561e8301002a, 0x7ffc536cedb0)\n\t/.GOROOT/src/runtime/mheap.go:758 +0x8c fp=0x7ffc536ced98 sp=0x7ffc536ced48 pc=0x561e83d42f2c\nruntime.(*mcentral).grow(0x561e849860b8, 0x0)\n\t/.GOROOT/src/runtime/mcentral.go:232 +0x96 fp=0x7ffc536cede0 sp=0x7ffc536ced98 pc=0x561e83d36af6\nruntime.(*mcentral).cacheSpan(0x561e849860b8, 0x0)\n\t/.GOROOT/src/runtime/mcentral.go:106 +0x2fa fp=0x7ffc536cee28 sp=0x7ffc536cede0 pc=0x561e83d3664a\nruntime.(*mcache).refill(0x7fc3d8e68000, 
0x220000002a)\n\t/.GOROOT/src/runtime/mcache.go:122 +0x99 fp=0x7ffc536cee58 sp=0x7ffc536cee28 pc=0x561e8
> Jul 11 16:35:15 sonm dockerd[916]: 3d36209\nruntime.(*mcache).nextFree.func1()\n\t/.GOROOT/src/runtime/malloc.go:749 +0x34 fp=0x7ffc536cee78 sp=0x7ffc536cee58 pc=0x561e83d73a14\nruntime.(*mcache).nextFree(0x7fc3d8e68000, 0x561e849a0d2a, 0x4000, 0x7fc3d8e68000, 0x7ffc536cef38)\n\t/.GOROOT/src/runtime/malloc.go:748 +0xb8 fp=0x7ffc536ceed0 sp=0x7ffc536cee78 pc=0x561e83d2b5a8\nruntime.mallocgc(0x180, 0x561e846963e0, 0x7ffc536cef01, 0x7fc3d8e6c000)\n\t/.GOROOT/src/runtime/malloc.go:903 +0x7a5 fp=0x7ffc536cef70 sp=0x7ffc536ceed0 pc=0x561e83d2bf05\nruntime.newobject(0x561e846963e0, 0x561e849a0d80)\n\t/.GOROOT/src/runtime/malloc.go:1032 +0x3a fp=0x7ffc536cefa0 sp=0x7ffc536cef70 pc=0x561e83d2c2fa\nruntime.malg(0x7fc300008000, 0x7fc3d8e68000)\n\t/.GOROOT/src/runtime/proc.go:3288 +0x33 fp=0x7ffc536cefe0 sp=0x7ffc536cefa0 pc=0x561e83d54743\nruntime.mpreinit(0x561e8497f620)\n\t/.GOROOT/src/runtime/os_linux.go:311 +0x2b fp=0x7ffc536cf000 sp=0x7ffc536cefe0 pc=0x561e83d493fb\nruntime.mcommoninit(0x561e8497f620)\n\t/.GOROOT/src/runtime/proc.go:624 +0xc5 fp=0x7ffc536cf038 sp=0x7ffc536cf000 pc=0x561e83d4dee5\nruntime.schedinit()\n\t/.GOROOT/src/runtime/proc.go:546 +0x8d fp=0x7ffc536cf0a0 sp=0x7ffc536cf038 pc=0x561e83d4dbad\nruntime.rt0_go(0x7ffc536cf1a8, 0xa, 0x7ffc536cf1a8, 0x0, 0x7fc3d82ac2e1, 0x40000, 0x7ffc536cf1a8, 0xad83ed508, 0x561e83d76580, 0x0, …)\n\t/.GOROOT/src/runtime/asm_amd64.s:195 +0x11e fp=0x7ffc536cf0a8 sp=0x7ffc536cf0a0 pc=0x561e83d766ae\n: unknown”
The only option was to kill Docker and then kill the remaining storagenode threads.

The directory containing info.db looks like this:

drwx------ 6 root root       4096 Jun  6 18:59 blob
-rw-r--r-- 1 root root 2261860352 Jul 11 17:40 info.db
-rw-r--r-- 1 root root    5242880 Jul 11 17:41 info.db-shm
-rw-r--r-- 1 root root 2683652672 Jul 11 17:42 info.db-wal
drwx------ 2 root root      12288 Jul 11 17:42 tmp
drwx------ 2 root root       4096 Jul 11 17:34 trash

Can somebody explain to me what the info.db-shm and info.db-wal files are?

It would be nice to have some control over memory consumption.

@Adavan

The shm and wal files are created by SQLite when it runs in Write-Ahead Log (WAL) mode.

See https://www.sqlite.org/tempfiles.html
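To illustrate the answer above, here is a minimal stdlib-only Python sketch (not the storagenode's actual code; file names are just for the demo) showing that WAL mode creates the `-wal` and `-shm` side files, and that a checkpoint folds the WAL contents back into the main database file:

```python
import os
import sqlite3
import tempfile

# Create a throwaway database and switch it to Write-Ahead Log mode.
tmp = tempfile.mkdtemp()
db = os.path.join(tmp, "info.db")

con = sqlite3.connect(db)
con.execute("PRAGMA journal_mode=WAL;")          # enable WAL mode
con.execute("CREATE TABLE pieces (id TEXT)")
con.executemany("INSERT INTO pieces VALUES (?)", [("x" * 64,)] * 1000)
con.commit()

wal_exists = os.path.exists(db + "-wal")         # the write-ahead log itself
shm_exists = os.path.exists(db + "-shm")         # shared-memory index into the WAL
wal_size_before = os.path.getsize(db + "-wal")

# A TRUNCATE checkpoint copies pending WAL pages into info.db
# and empties the WAL file.
con.execute("PRAGMA wal_checkpoint(TRUNCATE);")
wal_size_after = os.path.getsize(db + "-wal")
con.close()

print(wal_exists, shm_exists, wal_size_before, wal_size_after)
# -> True True <nonzero> 0
```

This is also why a large info.db-wal (2.6GB in the listing above) is not lost data: it is committed content that has not yet been checkpointed back into info.db.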

@ifraixedes Thank you. So I can consider all these files as one DB, right?

Unfortunately, the situation has repeated:

I'm attempting to enforce a memory limit via Docker; hopefully that helps.

The situation repeated:

I think it's not “by design”; it's an issue.

Yes, they are all part of the same DB.

I think so too. As a temporary workaround I added --memory="4g" to the docker command line; still testing, not sure it will help.
I'll let you know later, once the storagenode reaches this limit and keeps working stably for a few hours. docker stats looks like this:

CONTAINER ID        NAME                CPU %               MEM USAGE / LIMIT   MEM %               NET I/O             BLOCK I/O           PIDS
b8e229e0bb32        storagenode         13.16%              2.45GiB / 4GiB      61.25%              12.7GB / 11GB       1.76GB / 78.6GB     45
996f1154f9f0        watchtower          0.00%               4MiB / 5.822GiB     0.07%               7.19kB / 0B         1.15GB / 0B         14

Not working :frowning:.
When the storagenode reaches the memory limit enforced by Docker, it simply gets restarted.
Does anybody have another workaround?

Yes, I have: just put docker restart storagenode -t 300 in cron.
For me (I extended my RAM to 16GB) it works when run every 6 hours; any longer and I run into out-of-memory issues.
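A sketch of that workaround as a crontab config fragment (the container name, docker path, and the every-6-hours schedule are taken from this thread; adjust to your setup):

```shell
# Hypothetical crontab entry: restart the storagenode container every 6 hours,
# giving it up to 300 seconds to shut down cleanly before it is force-killed.
0 */6 * * * /usr/bin/docker restart -t 300 storagenode
```

Note that a clean shutdown window matters here: killing the node abruptly while SQLite has a large un-checkpointed WAL can make the next startup slower.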