Memory consumption

@Odmin thank you, but I think it does the same thing. It makes no difference whether the Docker daemon restarts the storage node or the cron daemon does it.


Luckily, it seems there is no impact on the transfer rate.

I'm looking for a better solution, like restarting the storagenode when it reaches the allocated RAM ;).
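
Something like this cron-driven check is what I have in mind; just a rough, untested sketch (the 800 MB threshold and the container name storagenode are only examples):

    #!/bin/sh
    # rough sketch: restart storagenode once it uses more than ~800 MB
    # `docker stats` prints e.g. "512.3MiB / 926.1MiB"; we take the first value
    THRESHOLD_MB=800
    used_mb=$(docker stats --no-stream --format '{{.MemUsage}}' storagenode |
        awk '{v=$1; if (v ~ /GiB/) { sub(/GiB.*/, "", v); v *= 1024 } else { sub(/MiB.*/, "", v) } print int(v)}')
    if [ "$used_mb" -gt "$THRESHOLD_MB" ]; then
        docker stop -t 300 storagenode && docker start storagenode
    fi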

Why is the arm container not being updated?

please update the arm docker hub image
root@raspberrypi:~# docker image history storjlabs/storagenode | head -2 && docker image history storjlabs/storagenode:arm | head -2
IMAGE CREATED CREATED BY SIZE COMMENT
f123141fb7a5 2 hours ago /bin/sh -c #(nop) ENV ADDRESS= EMAIL= WALLE… 0B
IMAGE CREATED CREATED BY SIZE COMMENT
36a672fc7d1f 47 hours ago /bin/sh -c #(nop) ENV ADDRESS= EMAIL= WALLE… 0B
root@raspberrypi:~#
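
(Once the arm tag is rebuilt on Docker Hub, re-pulling should pick it up; the tag name here is the same one used above:)

    docker pull storjlabs/storagenode:arm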

Hmm, this is the reason why RPi owners don't have this problem - they don't have this new version ;).

well, you know, i still have the problem where my stuff crashes all the time; i have not read up on the new problem.

I know, I built my node from spare parts :wink:. But it has redundancy for when some component goes to …
So, we must be patient and wait for a better solution.

Unfortunately it's not the same: when we have an out-of-memory issue, the OOM killer will kill processes, and that will definitely damage your DB soon.
The restart command sends a SIGTERM and waits for the processes to finish correctly (-t 300).
So I prefer a controlled restart to uncontrolled killing :slight_smile:
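
For example, with the container name used in this thread:

    docker restart -t 300 storagenode

This sends SIGTERM first and gives the node up to 300 seconds to exit cleanly before Docker falls back to SIGKILL.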

I know this solution is awful, but I just wanna survive until this issue is fixed.

My apologies, maybe some misunderstanding occurred.

I'm not waiting for the RAM capacity to be exhausted; I have set a container limit for RAM usage (see above). I think, though I'm not sure and don't want to wrangle with you, that Docker gracefully restarts the storagenode container, because in the log I see only a restart, no killing occurred.
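
For reference, the limit I mean is set when the container is created; a sketch with an illustrative 800 MB cap (identity mount, storage mount, ports and env are omitted here):

    # illustrative values; the real run command has the usual mounts, ports and env
    docker run -d --name storagenode --restart unless-stopped --memory 800m storjlabs/storagenode

As far as I understand, --memory is enforced by the kernel, and with a restart policy the container comes back up after the limit is hit.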

{"log":"2019-07-12T12:56:11.530Z\u0009\u001b[34mINFO\u001b[0m\u0009piecestore\u0009downloaded\u0009{\"Piece ID\": \"XAUC2CS4PAMA564D5VQKQ5XBHRLRXAGMDGUFNS34NYTI3K46OWCA\", \
"SatelliteID\": \"118UWpMCHzs6CvSgWd9BfFVjw5K9pZbJjkfZJexMtSkmKxvvAW\", \"Action\": \"GET\"}\n","stream":"stderr","time":"2019-07-12T12:56:11.530837633Z"}
{"log":"2019-07-12T12:56:11.627Z\u0009\u001b[34mINFO\u001b[0m\u0009piecestore\u0009downloaded\u0009{\"Piece ID\": \"ONFSNUA6SQZCLY2JSDDY5VCKWBMUC4AQXXBDWUSYE5S4JL6DUAFA\", \
"SatelliteID\": \"118UWpMCHzs6CvSgWd9BfFVjw5K9pZbJjkfZJexMtSkmKxvvAW\", \"Action\": \"GET\"}\n","stream":"stderr","time":"2019-07-12T12:56:11.627247828Z"}
{"log":"2019-07-12T12:56:11.629Z\u0009\u001b[34mINFO\u001b[0m\u0009piecestore\u0009download started\u0009{\"Piece ID\": \"BRFTEZYJEPMZ4ZAY7P4AIRZYZII6HA2QKQUIYATMNFCWZDM2BT4
Q\", \"SatelliteID\": \"118UWpMCHzs6CvSgWd9BfFVjw5K9pZbJjkfZJexMtSkmKxvvAW\", \"Action\": \"GET\"}\n","stream":"stderr","time":"2019-07-12T12:56:11.629998066Z"}
{"log":"2019-07-12T12:56:11.777Z\u0009\u001b[34mINFO\u001b[0m\u0009piecestore\u0009upload started\u0009{\"Piece ID\": \"NF36COOJZTYGI37HVO5ON6PPJ52D2OGGXIJ3V5PPGPUK6S3ZKN5A\
", \"SatelliteID\": \"12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs\", \"Action\": \"PUT\"}\n","stream":"stderr","time":"2019-07-12T12:56:11.777486093Z"}
{"log":"2019-07-12T12:56:14.388Z\u0009\u001b[34mINFO\u001b[0m\u0009Configuration loaded from: /app/config/config.yaml\n","stream":"stderr","time":"2019-07-12T12:56:14.388941
211Z"}
{"log":"2019-07-12T12:56:14.434Z\u0009\u001b[34mINFO\u001b[0m\u0009Operator email: XXX \n","stream":"stderr","time":"2019-07-12T12:56:14.434348525Z"}
{"log":"2019-07-12T12:56:14.434Z\u0009\u001b[34mINFO\u001b[0m\u0009Operator wallet: 0xXXX5\n","stream":"stderr","time":"2019-07-12T12:56:
14.434366667Z"}
{"log":"2019-07-12T12:56:15.618Z\u0009\u001b[34mINFO\u001b[0m\u0009running on version v0.14.11\n","stream":"stderr","time":"2019-07-12T12:56:15.618744162Z"}
{"log":"2019-07-12T12:56:15.672Z\u0009\u001b[34mINFO\u001b[0m\u0009db.migration\u0009Latest Version\u0009{\"version\": 7}\n","stream":"stderr","time":"2019-07-12T12:56:15.67
2249051Z"}
{"log":"2019-07-12T12:56:15.672Z\u0009\u001b[34mINFO\u001b[0m\u0009vouchers\u0009Checking vouchers\n","stream":"stderr","time":"2019-07-12T12:56:15.672432675Z"}
{"log":"2019-07-12T12:56:15.672Z\u0009\u001b[34mINFO\u001b[0m\u0009Node 12ApJ4xCsbyLZR6WnVoUJpcYoiCX9Mi895sHQxVK4E35dZbbLQt started\n","stream":"stderr","time":"2019-07-12T1
2:56:15.672450345Z"}
{"log":"2019-07-12T12:56:15.672Z\u0009\u001b[34mINFO\u001b[0m\u0009Public server started on [::]:28967\n","stream":"stderr","time":"2019-07-12T12:56:15.672497207Z"}
{"log":"2019-07-12T12:56:15.672Z\u0009\u001b[34mINFO\u001b[0m\u0009Private server started on 127.0.0.1:7778\n","stream":"stderr","time":"2019-07-12T12:56:15.672511817Z"}
{"log":"2019-07-12T12:56:15.793Z\u0009\u001b[34mINFO\u001b[0m\u0009running on version v0.14.11\n","stream":"stderr","time":"2019-07-12T12:56:15.793830998Z"}
{"log":"2019-07-12T12:56:15.817Z\u0009\u001b[34mINFO\u001b[0m\u0009piecestore\u0009download started\u0009{\"Piece ID\": \"MCPPTPNWCVUPOLNQE4JGR4RMLWUC2QQRXHQVKKQSCZ5WNZ3JWCI
A\", \"SatelliteID\": \"12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs\", \"Action\": \"GET\"}\n","stream":"stderr","time":"2019-07-12T12:56:15.817761702Z"}
{"log":"2019-07-12T12:56:15.988Z\u0009\u001b[34mINFO\u001b[0m\u0009piecestore:orderssender.118UWpMCHzs6CvSgWd9BfFVjw5K9pZbJjkfZJexMtSkmKxvvAW\u0009sending\u0009{\"count\": 1
88}\n","stream":"stderr","time":"2019-07-12T12:56:15.989092895Z"}
{"log":"2019-07-12T12:56:15.988Z\u0009\u001b[34mINFO\u001b[0m\u0009piecestore:orderssender.121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6\u0009sending\u0009{\"count\": 1}\n","stream":"stderr","time":"2019-07-12T12:56:15.989124405Z"}
{"log":"2019-07-12T12:56:15.989Z\u0009\u001b[34mINFO\u001b[0m\u0009piecestore:orderssender.12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S\u0009sending\u0009{\"count\": 1}\n","stream":"stderr","time":"2019-07-12T12:56:15.989135578Z"}
{"log":"2019-07-12T12:56:15.989Z\u0009\u001b[34mINFO\u001b[0m\u0009piecestore:orderssender.12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs\u0009sending\u0009{\"count\": 411}\n","stream":"stderr","time":"2019-07-12T12:56:15.989144413Z"}
{"log":"2019-07-12T12:56:16.196Z\u0009\u001b[34mINFO\u001b[0m\u0009piecestore\u0009upload started\u0009{\"Piece ID\": \"PHGLZLJLHSUKP6PDDZZ2RDWLW7ZFIHH25GW55RWMLSUKMKULPHJA\", \"SatelliteID\": \"12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs\", \"Action\": \"PUT\"}\n","stream":"stderr","time":"2019-07-12T12:56:16.197194024Z"}
{"log":"2019-07-12T12:56:17.673Z\u0009\u001b[34mINFO\u001b[0m\u0009piecestore\u0009upload started\u0009{\"Piece ID\": \"7UASWHM7MWHLDLV34P3INTOFQ3R6GSL3CLPD3LEYALJYT7JV6H5A\", \"SatelliteID\": \"12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs\", \"Action\": \"PUT\"}\n","stream":"stderr","time":"2019-07-12T12:56:17.673372531Z"}
{"log":"2019-07-12T12:56:18.422Z\u0009\u001b[34mINFO\u001b[0m\u0009piecestore\u0009upload started\u0009{\"Piece ID\": \"AJLSYFYCXYQEOLPUM43EOF5UXH6X5U6LTUBEYVJG7E3OH4GN4Z2A\", \"SatelliteID\": \"12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs\", \"Action\": \"PUT\"}\n","stream":"stderr","time":"2019-07-12T12:56:18.422875621Z"}
{"log":"2019-07-12T12:56:18.702Z\u0009\u001b[34mINFO\u001b[0m\u0009piecestore:orderssender.12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S\u0009finished\n","stream":"stderr","time":"2019-07-12T12:56:18.70213708Z"}

This workaround, I think, is better because it has no time dependency. The last run lasted about 10 hours, but in the future it could be shorter or longer, I don't know. Or I can allocate more memory, and then the time between restarts should be longer.

Still looking for a better solution; doing several restarts per day is, in my opinion, very bad :frowning:.

  0 */2 * * * docker stop storagenode && sleep 3m && docker start storagenode

punch in crontab -e b4 that with the root user if you haven't tied the pi user to docker mgmt

on the 1gb pis the crash occurs in ~<5hrs. use this. i mean, at this point all our asses are getting disqualified either way

also, don't blame me if the container still doesn't stop when this executes xd, blame that lovely OOM

I strongly recommend using a timeout; docker will wait only 10 sec by default, then it kills all processes inside the container!
As a result, you can destroy your info.db.
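
E.g. the cron line above with a 300-second stop timeout added; a sketch, adjust the schedule to your needs:

    0 */2 * * * docker stop -t 300 storagenode && sleep 3m && docker start storagenode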


@Odmin I wrote a longer message where I explain why setting a memory limit on the container is better, but it was hidden and marked as spam.

So, what happens when the container reaches this limit?

will add a 3 min timeout, but 3 times now the container stop would not finish at all. that only happens when i step in after it's crashed already and try to stop it. let's see if the container can stop as expected b4 the crash. i am also clearly having all the other issues, like the
upload rejected, too many requests {"live requests": 7} and
ERROR untrusted: unable to get signee: trust:: node not found

{"log":"2019-07-12T12:56:11.777Z\u0009\u001b[34mINFO\u001b[0m\u0009piecestore\u0009upload started\u0009{\"Piece ID\": \"NF36COOJZTYGI37HVO5ON6PPJ52D2OGGXIJ3V5PPGPUK6S3ZKN5A\
", \"SatelliteID\": \"12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs\", \"Action\": \"PUT\"}\n","stream":"stderr","time":"2019-07-12T12:56:11.777486093Z"}
{"log":"2019-07-12T12:56:14.388Z\u0009\u001b[34mINFO\u001b[0m\u0009Configuration loaded from: /app/config/config.yaml\n","stream":"stderr","time":"2019-07-12T12:56:14.388941
211Z"}
{"log":"2019-07-12T12:56:14.434Z\u0009\u001b[34mINFO\u001b[0m\u0009Operator email: XXX \n","stream":"stderr","time":"2019-07-12T12:56:14.434348525Z"}
{"log":"2019-07-12T12:56:14.434Z\u0009\u001b[34mINFO\u001b[0m\u0009Operator wallet: 0xXXX5\n","stream":"stderr","time":"2019-07-12T12:56:
14.434366667Z"}
{"log":"2019-07-12T12:56:15.618Z\u0009\u001b[34mINFO\u001b[0m\u0009running on version v0.14.11\n","stream":"stderr","time":"2019-07-12T12:56:15.618744162Z"}
{"log":"2019-07-12T12:56:15.672Z\u0009\u001b[34mINFO\u001b[0m\u0009db.migration\u0009Latest Version\u0009{\"version\": 7}\n","stream":"stderr","time":"2019-07-12T12:56:15.67
2249051Z"}
{"log":"2019-07-12T12:56:15.672Z\u0009\u001b[34mINFO\u001b[0m\u0009vouchers\u0009Checking vouchers\n","stream":"stderr","time":"2019-07-12T12:56:15.672432675Z"}
{"log":"2019-07-12T12:56:15.672Z\u0009\u001b[34mINFO\u001b[0m\u0009Node 12ApJ4xCsbyLZR6WnVoUJpcYoiCX9Mi895sHQxVK4E35dZbbLQt started\n","stream":"stderr","time":"2019-07-12T1
2:56:15.672450345Z"}
{"log":"2019-07-12T12:56:15.672Z\u0009\u001b[34mINFO\u001b[0m\u0009Public server started on [::]:28967\n","stream":"stderr","time":"2019-07-12T12:56:15.672497207Z"}
{"log":"2019-07-12T12:56:15.672Z\u0009\u001b[34mINFO\u001b[0m\u0009Private server started on 127.0.0.1:7778\n","stream":"stderr","time":"2019-07-12T12:56:15.672511817Z"}
{"log":"2019-07-12T12:56:15.793Z\u0009\u001b[34mINFO\u001b[0m\u0009running on version v0.14.11\n","stream":"stderr","time":"2019-07-12T12:56:15.793830998Z"}
{"log":"2019-07-12T12:56:15.817Z\u0009\u001b[34mINFO\u001b[0m\u0009piecestore\u0009download started\u0009{\"Piece ID\": \"MCPPTPNWCVUPOLNQE4JGR4RMLWUC2QQRXHQVKKQSCZ5WNZ3JWCI
A\", \"SatelliteID\": \"12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs\", \"Action\": \"GET\"}\n","stream":"stderr","time":"2019-07-12T12:56:15.817761702Z"}

Hope that should be better.

Hmm, all my posts are hidden by Akismet :frowning:. @Odmin please wait.

Yeah, this has to be fixed eventually. I've got 2.2TB used, and burning through 12GB of RAM is a little excessive.

I have 2.3TB used and 13GB of RAM consumed.

Btw guys, what are those graphical monitoring tools you are using?