Node restarts when it "wants"

Hi all! I'm running Debian 11.

The trouble is that the node keeps restarting, once per day or once per hour, depending on its mood.

My run command is:

```
docker run -d --restart always --stop-timeout 300 \
  -p 28967:28967/tcp \
  -p 28967:28967/udp \
  -p 14002:14002 \
  -e WALLET="--walletname----" \
  -e EMAIL="--emailname-----" \
  -e ADDRESS="------myadress---------" \
  -e STORAGE="14.5TB" \
  --user $(id -u):$(id -g) \
  --log-opt max-size=50m \
  --log-opt max-file=10 \
  --mount type=bind,source="/root/.local/share/storj/identity/storagenode/",destination=/app/identity \
  --mount type=bind,source="/media/diskForStorj/storjDataSpb1_14_5/",destination=/app/config \
  --name storagenode storjlabs/storagenode:latest
```

The node was started 3 hours ago (after a docker rm storagenode). Still, docker ps says:

After that I looked at the logs:

docker logs --tail 20 storagenode

Is this normal behavior, or is something wrong?

An update? If it's not the node updating, then something's wrong.

The version stays the same. I don't think there can be an update 24± times per day. :frowning:

Look at your system resource graphs and docker logs. It's possible you have a slow disk subsystem: the node allocates too much memory and gets killed.

A weird thing: my local time today is 22:42:40, and when I run docker logs --tail 10 storagenode it shows me these logs:
```
2023-06-25T19:42:09.651Z INFO piecestore uploaded {"process": "storagenode", "Piece ID": "JM4R2JKBPVGGHLLOKCPZXDGGASGAKLLMBBCXAF2VRYZKTFFSNRSA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Size": 768, "Remote Address": "5.161.44.92:15278"}
2023-06-25T19:42:10.041Z INFO piecestore upload started {"process": "storagenode", "Piece ID": "4K2YCJRXEHLJIES5Q7HUAGW2POX7WAWKEPCZKZ2TASMIH2GMZQTA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Available Space": 14023581897088, "Remote Address": "5.161.143.41:59982"}
```

The logs seem to be alright. The question is that my local time is now 23:03, and docker logs --tail 10 storagenode shows:

```
2023-06-25T20:03:12.571Z INFO piecestore upload started {"process": "storagenode", "Piece ID": "7G7I2WXVXDTKF2LBSDTF7JM5BXN7WR7NEGXWSYSZJSUSOPTI3SLA", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "PUT", "Available Space": 14023328369280, "Remote Address": "184.104.224.99:21848"}
2023-06-25T20:03:12.760Z INFO piecestore uploaded {"process": "storagenode", "Piece ID": "7G7I2WXVXDTKF2LBSDTF7JM5BXN7WR7NEGXWSYSZJSUSOPTI3SLA", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "PUT", "Size": 55296, "Remote Address": "184.104.224.99:21848"}
```

Does that mean the docker container is in a different time zone?

Yep, this is another time zone. The node restarted again 40 minutes ago. I worked out the time of the restart and checked with docker logs --tail 30000 storagenode | grep "ERROR"


errors and so on…
How can I fix it?

“Context Cancelled” does not need to be fixed: Error piercestore protocol: rpc error: code = canceled desc = context canceled - #2 by Alexey

Did you check resource graphs and docker (not container) logs? Something could also be in the system logs (e.g. /var/log/messages when a process is forcefully killed).
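
If the process is being killed by the kernel's OOM killer, it usually leaves a trace. A rough sketch of where to look (assuming the container is named storagenode, as in the run command above):

```
# Look for out-of-memory kills in the kernel log
dmesg -T | grep -iE "out of memory|oom-kill|killed process"

# Same information via the systemd journal
journalctl -k | grep -i oom

# Docker's own verdict on the last exit:
# OOMKilled=true or ExitCode=137 points at a memory kill
docker inspect storagenode --format 'OOMKilled={{.State.OOMKilled}} ExitCode={{.State.ExitCode}}'
```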

What is your storage configuration and filesystem, and how much RAM does the device have?

Side note: when posting logs, don't post screenshots; they are impossible to read. Instead, paste the text itself between triple backticks, like so:

```
Your log text here
```

This will come out looking like this:

Your log text here

Which is much easier to read.

Try searching for FATAL:
docker logs storagenode | grep "FATAL"

You need the last couple of lines before the restart.
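
One way to find those lines, as a sketch (container name storagenode as above; note that the log timestamps are in UTC):

```
# When did Docker last (re)start the container?
STARTED_AT=$(docker inspect storagenode --format '{{.State.StartedAt}}')
echo "$STARTED_AT"

# Show the last log lines written before that restart
docker logs --until "$STARTED_AT" storagenode 2>&1 | tail -n 50
```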

Yes, you're right. I had ext4 filesystem errors. I ran fsck and it found about 30 errors (now fixed). After that I did docker rm storagenode, and now, after starting the node again, I get 100% busy I/O wait:

```
   TID  PRIO  USER     DISK READ  DISK WRITE  SWAPIN     IO>    COMMAND
    974 be/4 root        7.10 M/s    0.00 B/s  0.00 % 99.99 % storagenode run --config-dir config --identity-~allet=0x47FC36aE6bBE
    983 be/4 root        7.38 M/s    0.00 B/s  0.00 % 97.86 % storagenode run --config-dir config --identity-~allet=0x47FC3617C9DbaE6bBE
    253 be/3 root        0.00 B/s  115.21 K/s  0.00 % 21.03 % [jbd2/vda-8]
    980 be/4 root      174.68 K/s   85.48 K/s  0.00 % 20.51 % storagenode run --config-dir config --identity-~allet=0x47FC36179183b07C9DbaE6bBE
    978 be/4 root       14.87 K/s   81.76 K/s  0.00 %  8.27 % storagenode run --config-dir config --identity-~allet=0x47FC3617023b07C9DbaE6bBE
    853 be/4 root        0.00 B/s    3.72 K/s  0.00 %  0.00 % dockerd -H fd:// --containerd=/run/containerd/containerd.sock
    948 be/4 root        0.00 B/s  118.93 K/s  0.00 %  0.00 % storagenode run --config-dir config --identity-~allet=0x47FC3617029183b07C9DbaE6bBE
    987 be/4 root        0.00 B/s  118.93 K/s  0.00 %  0.00 % storagenode run --config-dir config --identity-~allet=0x47FC36170a9183b07C9DbaE6bBE
      1 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % init
```

My setup is an i5-3340 Proxmox machine; the VM has 2 cores and 1 GB of RAM, running Debian 11.
The Storj drive is SATA → /dev/vda

This is normal on start. If you had enough RAM, most metadata would be cached within the first few minutes and the IOPS load on the drive would drop.

This is not nearly enough for anything. You want enough free RAM for caching to fit at least the metadata in its entirety. If you run Storj in a separate VM, then that 1 GB is all it has, no caching is possible, and with a single drive you will immediately hit the IOPS limit: the storage node will start buffering, run out of RAM, and die. This won't work.

If you run it in a container with shared RAM, that could allow more efficient memory usage across containers. Give the VM at least 8 GB of RAM, ideally 16-24 GB.
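
Once the VM actually has spare RAM, you can warm the metadata cache deliberately instead of waiting for it to fill on its own. A sketch, using the data path from the run command earlier in the thread:

```
# Walk the data directory once so ext4 directory entries and inodes
# get pulled into the kernel cache (only useful if there is spare RAM)
find /media/diskForStorj/storjDataSpb1_14_5/ -printf "" &

# Watch the "buff/cache" column grow while it runs
free -h
```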

Yes, I can give it more RAM, but I have two questions:

  1. How much RAM does a 14.5 TB node need?
  2. How does everyone else run Storj nodes on a Raspberry Pi with 1-2 GB of RAM?

I'm not an expert on ext4, but you would want to calculate the amount of metadata needed for the number of files that will be stored and provide at least that much RAM, plus whatever the storagenode process itself requires (under a gigabyte, IIRC).

I don't know anyone who does that beyond a proof of concept, a very small node, or with some kind of SSD caching.

There is no magic; it's very simple. Your HDD can serve 200 IOPS, tops. That is the theoretical maximum.

Now, every upload and download involves reading and writing metadata. If metadata is not cached and lives on the same HDD, this uses up valuable IOPS. Let's say, generously, that it uses up just half. This means your node can service 100 uploads and downloads per second combined before the HDD can no longer keep up. At that point the node tries to buffer writes while waiting for IO, eating up RAM, which further takes away from the disk cache, making things worse, until it is ultimately killed by the OS (you still need to confirm that this is the culprit in your case).
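
The arithmetic behind that, spelled out as a sketch (the 200 IOPS ceiling is the rough figure for a single spinning disk quoted above):

```
# Rough IOPS budget for one HDD with uncached metadata
DISK_IOPS=200        # theoretical ceiling for a single spinning disk
OPS_PER_PIECE=2      # generously: metadata eats about half of each transfer
echo $(( DISK_IOPS / OPS_PER_PIECE ))   # ~100 uploads+downloads per second before the disk saturates
```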

There are ways to reduce the extra IO to a minimum (remove sync, access-time updates, etc.; search the forum, it has been discussed many times for a variety of OSes and file systems), but ultimately, the more files your node stores, the larger the amount of reads it must be able to serve, and the fewer resources are left for writes.
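
As one concrete example of trimming that extra IO, disabling access-time updates on the data filesystem saves an inode write on many reads. A sketch (the mount point is inferred from the bind path in this thread; check your own /etc/fstab before changing anything):

```
# Remount the Storj data filesystem without access-time updates
mount -o remount,noatime /media/diskForStorj

# To make it permanent, add noatime to the options column in /etc/fstab, e.g.:
# /dev/vda1  /media/diskForStorj  ext4  defaults,noatime  0  2
```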

Ideally, you want HDDs to serve only large, sequential IO, while small random IO is served from RAM or SSD.

You may want to try configuring your VM with 8 GB of RAM and running FreeBSD with a ZFS pool made up of your HDD plus an SSD as a special device that exclusively services small files and metadata. This will let you scale much further on a single HDD than would otherwise be possible with other filesystems.
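
A sketch of what such a pool could look like (device names are placeholders; losing a non-redundant special vdev loses the whole pool, hence the mirrored pair):

```
# One HDD for bulk data, two SSDs as a mirrored "special" vdev
# for metadata and small blocks (ada0/ada1/ada2 are placeholder devices)
zpool create storj ada0 special mirror ada1 ada2

# Send small blocks to the SSDs as well, not just metadata, and drop atime
zfs set special_small_blocks=16K storj
zfs set atime=off storj
```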

For ext4 with no RAID, no disk/filesystem corruption, and no virtualization, 1 GB of RAM should be enough; however, if you can add more, please do.
Under normal circumstances the node itself uses a small amount of RAM, but the OS also needs RAM to work and RAM for caching. So either skip the VM and run the docker container directly on the host, or add more RAM to the VM.

From my observations, the ext4 metadata cache for typical Storj node files takes around 1 GB of RAM per 1 TB of stored data. A theoretical estimate based on the size of metadata gives roughly the same number: around 300 bytes per file (inode + directory entry), around 1 M to 3 M files per 1 TB, plus some overhead for data not laid out in a contiguous way.
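
Spelled out, that estimate is just (a sketch of the arithmetic, not a measurement):

```
# ~300 bytes of metadata per file, up to ~3 million files per TB of pieces
BYTES_PER_FILE=300
FILES_PER_TB=3000000
echo $(( BYTES_PER_FILE * FILES_PER_TB ))   # 900000000 bytes, i.e. roughly 1 GB of metadata per TB
```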

You can make it more efficient if you reduce the inode size at file system creation time, though this parameter cannot be changed once there is data on the file system. If you don't have enough RAM, you should look for some other means of caching metadata, for example an SSD-based cache; otherwise you risk running out of IOPS as your node grows.
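
A sketch of what that looks like at creation time (destructive, so only on a new, empty partition; the 128-byte value and the device name are examples, not a recommendation for every setup):

```
# Format a fresh partition with 128-byte inodes instead of the ext4
# default of 256 bytes, roughly halving the inode part of the metadata.
# WARNING: mkfs wipes the filesystem; never run this on a disk with data.
mkfs.ext4 -I 128 /dev/sdX1
```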