Docker container keeps stopping

I log into my node every couple of days to check whether the docker process is still running. Every so often I notice, when running “docker ps”, that my storagenode container is no longer running. When I run “docker pull storjlabs/storagenode:latest” it says that my image is up to date.

What could cause this to stop running?
Is there a way to make sure that my docker container is always running?
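
From what I can tell, “docker ps” only lists running containers; “docker ps -a” should still show the container after it has stopped, along with an exit code, e.g.:
docker ps -a --filter name=storagenode  # container name is whatever was used in the docker run command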

Hi Daniel Jay, welcome.

Usually, when the docker container stops or restarts, it indicates a problem. Have you checked your logs for errors? Instructions can be found here:
https://documentation.storj.io/resources/faq/check-logs
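
For example, something like this should surface most problems (assuming the container is named storagenode as in the setup instructions; the node logs to stderr, hence the 2>&1):
docker logs storagenode 2>&1 | grep -E "ERROR|FATAL"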


bad connection to your storage media would be my guess… how are you storing the data / what kind of setup are you running?

to my knowledge that’s about the only thing that would actually stop the docker instance without relaunching it.

not counting you sleepwalking and running docker stop storagenode commands :smiley:
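
if you want to double check that docker would normally bring it back up, you can ask what restart policy the container was started with (assuming it’s named storagenode):
docker inspect -f '{{ .HostConfig.RestartPolicy.Name }}' storagenode  # the official docker run command uses unless-stopped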


My Storj node is a VM at my house and the storage is on an ESX virtual disk.

When I run docker logs storagenode, all I see so far are good entries. I will need to figure out if I can get records from before today.
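
It looks like docker logs has flags to narrow the window, e.g. to pull only the last few days with timestamps:
docker logs --since 72h --timestamps storagenode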


i’m not sure how brief of a disconnect will trigger it to shut down the node, but i would expect that’s what’s happening… dunno how relevant it would be if it’s on the same server… and ESXi is pretty solid, so i cannot imagine that’s the problem…

i would suspect a setup where you have a NAS or some such thing and connect to it over the network to use it as storage, while the node host runs on another machine… that can lead to brief disconnects for whatever reason…

Networks are rarely perfectly stable, most often they are 99.9% or something like that… and then when it loses the connection for a few milliseconds the node may shut itself down… (dunno the exact threshold, but the node emergency shutdown feature is kinda new, so it might be a bit oversensitive)

if you aren’t using a network connection to your storage, you shouldn’t be seeing anything like this.

but there is often a lot of annoying trouble with running storage over the network for storagenodes…
if the storagenode isn’t huge yet and you can move its data onto the server the storagenode is running on, then that might be worth a shot…

else, as Baker said… check the logs, i would assume the log will say why the storagenode shut down…
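
besides the log, docker also remembers how the container last exited, which can hint at whether it crashed or got killed… a quick sketch, assuming the container is named storagenode:
docker inspect -f 'exit={{ .State.ExitCode }} oom={{ .State.OOMKilled }} finished={{ .State.FinishedAt }}' storagenode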


I used to run nodes where the storage was on an NFS share. Sadly that kept having issues and the node was disqualified long ago. I created a whole new node where the VM and the storagenode storage are on the same physical disks.
That node has been running great for a long time and only recently started having issues. I already have almost 2TB stored on it.

I guess I’ll wait until it shuts down again and check the logs then.


your logs should be saved since the last version update; you can just run this docker command and you will have the entire log exported to look at…

since you are doing NFS shares and ESXi, i doubt i need to explain.
docker logs storagenode >& /tmp/stdout.log

then you search for a gap in time… i dunno what it would end on, but after the end comes the beginning… so you can search for any of the boot log messages and then see the end of the previous run… at least back until the version switch cleared the docker logs… if you are on linux that was most likely 4-5 days ago, maybe a bit less, and longer on windows…
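
one way to find those gaps in the exported log is to search for a startup message and print the line just before each match… that line is the last thing logged before the previous run ended. in my logs the first line after a (re)start looks something like “Configuration loaded”, but the exact wording may differ between versions:
grep -n -B1 "Configuration loaded" /tmp/stdout.log  # swap the pattern for whatever your startup lines look like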
