Unexpected fault address

Your wallet address is public information. So, posting it shouldn’t be a significant issue.

However, it’s generally a good idea to employ good “Operational Security” (OpSec) when posting online. This includes removing extraneous information that isn’t needed to understand or resolve a problem, as well as anything that personally identifies an account or presence.

BTW: Were you able to get your node running again?


Hello and thanks for your answer.

Sadly, the node is still down. I have no clue how to proceed. :frowning:

All stored data and configuration of the node should be located outside of the docker container, so there’s not much point in troubleshooting the container itself. It’s likely that your OS upgrade corrupted the docker image somehow.
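If you want to double-check that, here's a quick sketch (assuming the container is named storagenode, as in your run command) that lists the container's bind mounts so you can confirm the identity and data directories really live on the host:

```shell
# Sketch: list the storagenode container's bind mounts to confirm the
# data and identity directories live on the host, outside the container.
# Guarded so it degrades gracefully where docker or the container is absent.
if command -v docker >/dev/null 2>&1; then
    docker inspect --format \
        '{{range .Mounts}}{{.Source}} -> {{.Destination}}{{"\n"}}{{end}}' \
        storagenode || echo "no storagenode container found"
else
    echo "docker not available on this machine"
fi
```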

So, you may want to stop and remove the watchtower container:

docker stop watchtower

and then

docker rm watchtower

Then you should be safe following the instructions on the Storj installation pages for a manual upgrade, and you can go into more troubleshooting detail if the newly created container doesn’t function. If it seems to be working fine, re-create the watchtower container per Automatic Updates in the Storj documentation:

docker run -d --restart=always --name watchtower -v /var/run/docker.sock:/var/run/docker.sock storjlabs/watchtower storagenode watchtower --stop-timeout 300s --interval 21600

Ok I will try what you suggest and provide feedback. Thanks.

beast, I have done what you suggested, but after restarting the node I still get the same error. Please find below everything I did.

fmas@delta:~$ docker stop watchtower
watchtower
fmas@delta:~$ docker rm watchtower
watchtower
fmas@delta:~$ docker stop -t 300 storagenode
storagenode
fmas@delta:~$ docker rm storagenode
storagenode
fmas@delta:~$ docker pull storjlabs/storagenode:beta
beta: Pulling from storjlabs/storagenode
Digest: sha256:46489ce5724cba8dbf252f882ee6e4fc05303788cd6e22666dab096e357d209c
Status: Downloaded newer image for storjlabs/storagenode:beta
docker.io/storjlabs/storagenode:beta
fmas@delta:~$ docker run -d --restart unless-stopped -p 28967:28967 -e WALLET="" -e EMAIL="" -e ADDRESS="" -e BANDWIDTH="5TB" -e STORAGE="800GB" --mount type=bind,source="/home/fmas/Documents/Storj/Identity/storagenode",destination=/app/identity --mount type=bind,source="/hdd/StorjShareV3",destination=/app/config --name storagenode storjlabs/storagenode:beta
77b9e4dcbeffab3323a5aeb25488eaf9dc078360463951a72a3a7d54c8bfeb8d
fmas@delta:~$ docker exec -it storagenode /app/dashboard.sh
2019-09-28T13:21:37.825Z	INFO	Configuration loaded from: /app/config/config.yaml
2019-09-28T13:21:37.857Z	INFO	Node ID: 1VyLHATWP4fNTrCdFX3GKqkHRAyz7Y77RSXQaT3ysrTvLx8fq4
2019-09-28T13:21:37.905Z	FATAL	Unrecoverable error	{"error": "transport error: connection error: desc = \"transport: error while dialing: dial tcp 127.0.0.1:7778: connect: connection refused\"", "errorVerbose": "transport error: connection error: desc = \"transport: error while dialing: dial tcp 127.0.0.1:7778: connect: connection refused\"\n\tstorj.io/storj/pkg/transport.DialAddressInsecure:31\n\tmain.dialDashboardClient:37\n\tmain.cmdDashboard:66\n\tstorj.io/storj/pkg/process.cleanup.func1.2:264\n\tstorj.io/storj/pkg/process.cleanup.func1:282\n\tgithub.com/spf13/cobra.(*Command).execute:762\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:852\n\tgithub.com/spf13/cobra.(*Command).Execute:800\n\tstorj.io/storj/pkg/process.Exec:73\n\tmain.main:296\n\truntime.main:203"}
fmas@delta:~$ 

I have shut down the computer and restarted. Same error.

The logs look like the following:

fmas@delta:~$ docker logs --tail 100 storagenode
	/usr/local/go/src/database/sql/sql.go:1065 +0xfb
created by database/sql.OpenDB
	/usr/local/go/src/database/sql/sql.go:723 +0x193
2019-09-28T13:32:30.989Z	INFO	Configuration loaded from: /app/config/config.yaml
2019-09-28T13:32:31.015Z	INFO	Operator email: 
2019-09-28T13:32:31.015Z	INFO	operator wallet: 
unexpected fault address 0x7f5b46026000
fatal error: fault
[signal SIGBUS: bus error code=0x2 addr=0x7f5b46026000 pc=0xb174f1]

goroutine 1 [running]:
runtime.throw(0xef76d1, 0x5)
	/usr/local/go/src/runtime/panic.go:774 +0x72 fp=0xc000162998 sp=0xc000162968 pc=0x4335e2
runtime.sigpanic()
	/usr/local/go/src/runtime/signal_unix.go:391 +0x455 fp=0xc0001629c8 sp=0xc000162998 pc=0x448e35
github.com/boltdb/bolt.(*DB).page(...)
	/go/pkg/mod/github.com/boltdb/bolt@v1.3.1/db.go:796
github.com/boltdb/bolt.(*DB).mmap(0xc00036a1e0, 0x0, 0x0, 0x0)
	/go/pkg/mod/github.com/boltdb/bolt@v1.3.1/db.go:282 +0x251 fp=0xc000162a88 sp=0xc0001629c8 pc=0xb174f1
github.com/boltdb/bolt.Open(0xc0001323c7, 0x15, 0x180, 0xc000162b98, 0xc0001aa3c0, 0xc000085400, 0xc000162bb0)
	/go/pkg/mod/github.com/boltdb/bolt@v1.3.1/db.go:230 +0x2ae fp=0xc000162b50 sp=0xc000162a88 pc=0xb16ebe
storj.io/storj/storage/boltdb.New(0xc0001323c7, 0x15, 0xefd33b, 0xb, 0x2, 0x2, 0xc000362220)
	/go/src/storj.io/storj/storage/boltdb/client.go:41 +0x7f fp=0xc000162c30 sp=0xc000162b50 pc=0xb2732f
storj.io/storj/pkg/revocation.newDBBolt(0xc0001323c7, 0x15, 0xc0001323c0, 0x4, 0xc0001323c7)
	/go/src/storj.io/storj/pkg/revocation/common.go:52 +0x4e fp=0xc000162c80 sp=0xc000162c30 pc=0xb73dbe
storj.io/storj/pkg/revocation.NewDB(0xc0001323c0, 0x1c, 0xe, 0xc0002ed560, 0x1c)
	/go/src/storj.io/storj/pkg/revocation/common.go:34 +0x1bf fp=0xc000162ce0 sp=0xc000162c80 pc=0xb73caf
storj.io/storj/pkg/revocation.NewDBFromCfg(...)
	/go/src/storj.io/storj/pkg/revocation/common.go:21
main.cmdRun(0x1888ae0, 0xc000197ad0, 0x0, 0xd, 0x0, 0x0)
	/go/src/storj.io/storj/cmd/storagenode/main.go:143 +0x521 fp=0xc000163280 sp=0xc000162ce0 pc=0xc29711
storj.io/storj/pkg/process.cleanup.func1.2(0x104aae0, 0xc0001c5c20)
	/go/src/storj.io/storj/pkg/process/exec_conf.go:264 +0x13b fp=0xc000163318 sp=0xc000163280 pc=0xaec53b
storj.io/storj/pkg/process.cleanup.func1(0x1888ae0, 0xc000197ad0, 0x0, 0xd, 0x0, 0x0)
	/go/src/storj.io/storj/pkg/process/exec_conf.go:282 +0x17df fp=0xc000163d50 sp=0xc000163318 pc=0xaeddcf
github.com/spf13/cobra.(*Command).execute(0x1888ae0, 0xc000197930, 0xd, 0xd, 0x1888ae0, 0xc000197930)
	/go/pkg/mod/github.com/spf13/cobra@v0.0.3/command.go:762 +0x460 fp=0xc000163e28 sp=0xc000163d50 pc=0x62cbb0
github.com/spf13/cobra.(*Command).ExecuteC(0x1888880, 0xc0000a4120, 0x1, 0x1)
	/go/pkg/mod/github.com/spf13/cobra@v0.0.3/command.go:852 +0x2ea fp=0xc000163ef8 sp=0xc000163e28 pc=0x62d5ea
github.com/spf13/cobra.(*Command).Execute(...)
	/go/pkg/mod/github.com/spf13/cobra@v0.0.3/command.go:800
storj.io/storj/pkg/process.Exec(0x1888880)
	/go/src/storj.io/storj/pkg/process/exec_conf.go:73 +0x17f fp=0xc000163f48 sp=0xc000163ef8 pc=0xae8f0f
main.main()
	/go/src/storj.io/storj/cmd/storagenode/main.go:296 +0x2d fp=0xc000163f60 sp=0xc000163f48 pc=0xc2af8d
runtime.main()
	/usr/local/go/src/runtime/proc.go:203 +0x21e fp=0xc000163fe0 sp=0xc000163f60 pc=0x434f7e
runtime.goexit()
	/usr/local/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc000163fe8 sp=0xc000163fe0 pc=0x463541

goroutine 5 [syscall]:
os/signal.signal_recv(0x0)
	/usr/local/go/src/runtime/sigqueue.go:147 +0x9c
os/signal.loop()
	/usr/local/go/src/os/signal/signal_unix.go:23 +0x22
created by os/signal.init.0
	/usr/local/go/src/os/signal/signal_unix.go:29 +0x41

goroutine 26 [IO wait]:
internal/poll.runtime_pollWait(0x7f5b46220f30, 0x72, 0x0)
	/usr/local/go/src/runtime/netpoll.go:184 +0x55
internal/poll.(*pollDesc).wait(0xc000135718, 0x72, 0x0, 0x0, 0xef9734)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:87 +0x45
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:92
internal/poll.(*FD).Accept(0xc000135700, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
	/usr/local/go/src/internal/poll/fd_unix.go:384 +0x1f8
net.(*netFD).accept(0xc000135700, 0xc000065cc0, 0xc000102e00, 0x7f5b48519008)
	/usr/local/go/src/net/fd_unix.go:238 +0x42
net.(*TCPListener).accept(0xc00030a660, 0xc000065cf0, 0x4110a8, 0x30)
	/usr/local/go/src/net/tcpsock_posix.go:139 +0x32
net.(*TCPListener).Accept(0xc00030a660, 0xe60d20, 0xc000156900, 0xdb22e0, 0x187c0a0)
	/usr/local/go/src/net/tcpsock.go:261 +0x47
net/http.(*Server).Serve(0xc0000fa2a0, 0x1044fa0, 0xc00030a660, 0x0, 0x0)
	/usr/local/go/src/net/http/server.go:2896 +0x286
storj.io/storj/pkg/process.initDebug.func2(0xc0002c5440, 0x1044fa0, 0xc00030a660, 0xc000085b80)
	/go/src/storj.io/storj/pkg/process/debug.go:52 +0x15d
created by storj.io/storj/pkg/process.initDebug
	/go/src/storj.io/storj/pkg/process/debug.go:50 +0x38f

goroutine 28 [chan receive]:
storj.io/storj/pkg/process.Ctx.func1(0xc0002c5680, 0xc000326d50)
	/go/src/storj.io/storj/pkg/process/exec_conf.go:89 +0x41
created by storj.io/storj/pkg/process.Ctx
	/go/src/storj.io/storj/pkg/process/exec_conf.go:88 +0x1b6

goroutine 29 [select]:
database/sql.(*DB).connectionOpener(0xc0001aa3c0, 0x104a820, 0xc000085f80)
	/usr/local/go/src/database/sql/sql.go:1052 +0xe8
created by database/sql.OpenDB
	/usr/local/go/src/database/sql/sql.go:722 +0x15d

goroutine 30 [select]:
database/sql.(*DB).connectionResetter(0xc0001aa3c0, 0x104a820, 0xc000085f80)
	/usr/local/go/src/database/sql/sql.go:1065 +0xfb
created by database/sql.OpenDB
	/usr/local/go/src/database/sql/sql.go:723 +0x193
2019-09-28T13:32:39.654Z	INFO	Configuration loaded from: /app/config/config.yaml
2019-09-28T13:32:39.677Z	INFO	Operator email: 
2019-09-28T13:32:39.677Z	INFO	operator wallet: 
fmas@delta:~$ 

I am starting to think that the recent updates have messed up my Ubuntu installation and that a reinstall is necessary. I am not very good with Linux at all; someone with more knowledge could most probably work out what went wrong during the update.

Thanks for any help you can provide.

Does this mount point exist in your Ubuntu /etc/fstab ?

On your Ubuntu OS what is the output of:

cat /etc/fstab

and

sudo fdisk -l

and

blkid

Please find the results here below:

fmas@delta:~$ cat /etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
# / was on /dev/sda1 during installation
UUID=5b5fe65d-681e-4e52-b0a9-bcbaf13399a8 /               ext4    errors=remount-ro 0       1
/swapfile                                 none            swap    sw              0       0
/dev/sdb1				  /hdd		  ext4 	  defaults          0       0
fmas@delta:~$ sudo fdisk -l
Disk /dev/loop0: 14.8 MiB, 15462400 bytes, 30200 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/loop1: 1008 KiB, 1032192 bytes, 2016 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/loop2: 34.6 MiB, 36216832 bytes, 70736 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/loop3: 88.7 MiB, 92983296 bytes, 181608 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/loop4: 624 KiB, 638976 bytes, 1248 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/loop5: 3.7 MiB, 3825664 bytes, 7472 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/loop6: 149.9 MiB, 157192192 bytes, 307016 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/loop7: 4.2 MiB, 4403200 bytes, 8600 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes




Disk /dev/sda: 298.1 GiB, 320072933376 bytes, 625142448 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x84f6626f

Device     Boot Start       End   Sectors   Size Id Type
/dev/sda1  *     2048 625141759 625139712 298.1G 83 Linux


Disk /dev/sdb: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0x0006f3ba

Device     Boot Start        End    Sectors  Size Id Type
/dev/sdb1        2048 3907028991 3907026944  1.8T 83 Linux

Disk /dev/loop8: 3.7 MiB, 3878912 bytes, 7576 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/loop9: 149.9 MiB, 157184000 bytes, 307000 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/loop10: 54.4 MiB, 57065472 bytes, 111456 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/loop11: 140.7 MiB, 147501056 bytes, 288088 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/loop12: 89 MiB, 93327360 bytes, 182280 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/loop13: 54.4 MiB, 57069568 bytes, 111464 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/loop14: 14.8 MiB, 15462400 bytes, 30200 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/loop15: 956 KiB, 978944 bytes, 1912 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/loop16: 140.7 MiB, 147501056 bytes, 288088 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/loop17: 4 MiB, 4218880 bytes, 8240 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/loop18: 42.8 MiB, 44879872 bytes, 87656 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
fmas@delta:~$ blkid
/dev/loop0: TYPE="squashfs"
/dev/loop1: TYPE="squashfs"
/dev/loop2: TYPE="squashfs"
/dev/loop3: TYPE="squashfs"
/dev/loop4: TYPE="squashfs"
/dev/loop5: TYPE="squashfs"
/dev/loop6: TYPE="squashfs"
/dev/loop7: TYPE="squashfs"
/dev/sda1: UUID="5b5fe65d-681e-4e52-b0a9-bcbaf13399a8" TYPE="ext4" PARTUUID="84f6626f-01"
/dev/sdb1: UUID="e9b9434f-e817-43a9-b450-6978ff9ab120" TYPE="ext4" PARTUUID="0006f3ba-01"
/dev/loop8: TYPE="squashfs"
/dev/loop9: TYPE="squashfs"
/dev/loop10: TYPE="squashfs"
/dev/loop11: TYPE="squashfs"
/dev/loop12: TYPE="squashfs"
/dev/loop13: TYPE="squashfs"
/dev/loop14: TYPE="squashfs"
/dev/loop15: TYPE="squashfs"
/dev/loop16: TYPE="squashfs"
/dev/loop17: TYPE="squashfs"
/dev/loop18: TYPE="squashfs"
fmas@delta:~$ 

I do not know what loops are, but it does not seem right to me that I have so many.

Loops are special device names for mounting a file as a filesystem. You’ll note that each /dev/loopX mount point is listed as squashfs.

Squashfs is a compressed filesystem, usually used for special-purpose OSes; for example, LibreELEC is distributed as a squashfs bootable image. On Ubuntu, snap packages are also mounted as squashfs loop images, which is most likely where so many mounted compressed filesystems are coming from.
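If you're curious which file backs each loop device, losetup can show it (a sketch; on Ubuntu the backing files are typically snap packages under /var/lib/snapd/snaps):

```shell
# Sketch: show which file backs each /dev/loopN device. On Ubuntu these
# are usually snap packages (squashfs images under /var/lib/snapd/snaps).
if command -v losetup >/dev/null 2>&1; then
    losetup -a || true
else
    # fallback: list the squashfs mounts directly
    grep squashfs /proc/mounts || true
fi
```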

However, it does seem like you have your larger storage drive located at /hdd/

So, the next step is to make sure that you’ve got the correct directory and that your root filesystem is not too full.

This should list the top-level directories of your larger storage drive:

sudo ls -l /hdd/*

And this should list the space left on each mounted drive:

sudo df -H
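If the output is cluttered by the snap loop mounts, a filtered variant (just a convenience sketch) keeps only the real disks:

```shell
# Sketch: same df listing with the snap loop devices and tmpfs rows
# filtered out, so the real disks stand out.
df -H | grep -Ev 'loop|tmpfs|udev'
```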

fmas@delta:~$ sudo ls -l /hdd/*
[sudo] password for fmas: 
/hdd/lost+found:
ls: reading directory '/hdd/lost+found': Input/output error
total 0

/hdd/StorjShareV3:
total 328
-rwxrwxrwx 1 fmas fmas    752 Jul 22 08:02 config.yaml
-rw------- 1 root root  32768 Sep 28 18:51 kademlia
-rwxrwxrwx 1 fmas fmas 524288 Sep 27 22:37 kademlia.bak
-rwxrwxrwx 1 fmas fmas  32768 Sep 27 18:27 revocations.db
drwxrwxrwx 7 fmas fmas   4096 Sep 27 18:29 storage
-rwxrwxrwx 1 fmas fmas   4774 Jul 25 16:03 successrate.sh
fmas@delta:~$ ^C
fmas@delta:~$ sudo df -H
Filesystem      Size  Used Avail Use% Mounted on
udev            2.1G     0  2.1G   0% /dev
tmpfs           414M  2.0M  412M   1% /run
/dev/sda1       314G  9.8G  289G   4% /
tmpfs           2.1G  169M  1.9G   9% /dev/shm
tmpfs           5.3M  4.1k  5.3M   1% /run/lock
tmpfs           2.1G     0  2.1G   0% /sys/fs/cgroup
/dev/loop0       16M   16M     0 100% /snap/gnome-characters/317
/dev/loop1      1.1M  1.1M     0 100% /snap/gnome-logs/61
/dev/loop2       37M   37M     0 100% /snap/gtk-common-themes/818
/dev/loop3       94M   94M     0 100% /snap/core/7396
/dev/loop4      656k  656k     0 100% /snap/nano-editor/1
/dev/loop5      4.0M  4.0M     0 100% /snap/gnome-system-monitor/100
/dev/loop7      4.5M  4.5M     0 100% /snap/gnome-calculator/501
/dev/loop8      4.0M  4.0M     0 100% /snap/gnome-system-monitor/57
/dev/loop6      158M  158M     0 100% /snap/gnome-3-28-1804/71
/dev/loop9      158M  158M     0 100% /snap/gnome-3-28-1804/67
/dev/loop10      58M   58M     0 100% /snap/core18/1144
/dev/loop11     148M  148M     0 100% /snap/gnome-3-26-1604/90
/dev/loop12      94M   94M     0 100% /snap/core/7713
/dev/loop13      58M   58M     0 100% /snap/core18/1098
/dev/loop14      16M   16M     0 100% /snap/gnome-characters/296
/dev/loop15     1.1M  1.1M     0 100% /snap/gnome-logs/73
/dev/loop16     148M  148M     0 100% /snap/gnome-3-26-1604/92
/dev/loop17     4.4M  4.4M     0 100% /snap/gnome-calculator/406
/dev/loop18      45M   45M     0 100% /snap/gtk-common-themes/1313
/dev/sdb1       2.0T  803G  1.1T  43% /hdd
tmpfs           414M   17k  414M   1% /run/user/121
tmpfs           414M   46k  414M   1% /run/user/1000
fmas@delta:~$ 

Besides the loops, I see that sda1 is the drive where I have installed the OS, whereas sdb1 is the drive dedicated to storjshare, which is correct.

This may indicate a hard drive in the early stages of failing.

Do you hear any odd noises from the drive?

And you may want to add the output of:

sudo tune2fs -l /dev/sdb1

It should be a detailed list of the drive’s filesystem information.
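A SMART health check is another way to gauge whether the drive itself is failing. A sketch, assuming the smartmontools package is installed (sudo apt install smartmontools):

```shell
# Sketch: query the drive's SMART self-assessment, plus the attributes
# that most often flag a dying disk. Assumes smartmontools is installed;
# guarded so the snippet degrades gracefully where it isn't.
if command -v smartctl >/dev/null 2>&1; then
    sudo smartctl -H /dev/sdb || true    # overall PASSED/FAILED verdict
    sudo smartctl -A /dev/sdb | \
        grep -E 'Reallocated_Sector|Current_Pending|Offline_Uncorrectable' || true
else
    echo "install smartmontools first: sudo apt install smartmontools"
fi
```

Non-zero reallocated or pending sector counts would support the failing-drive theory.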

If you have a spare hard drive lying around, it might be a good idea to make a copy of your storage drive. You can make an image of the entire drive using dd, or copy all your storage node files using rsync:

  1. dd method

dd if=/dev/sdb of=/dev/sdx status=progress

Where /dev/sdx is whatever external drive you connect…

  2. rsync method

rsync -a --progress /hdd/StorjShareV3 /mnt/external_drive/storage_node_files

Should be
dd if=/dev/sdb of=/dev/sdx status=progress

Oops!

Thanks!

fmas@delta:~$ sudo tune2fs -l /dev/sdb1
[sudo] password for fmas: 
tune2fs 1.44.1 (24-Mar-2018)
Filesystem volume name:   <none>
Last mounted on:          /app/config
Filesystem UUID:          e9b9434f-e817-43a9-b450-6978ff9ab120
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file dir_nlink extra_isize metadata_csum
Filesystem flags:         signed_directory_hash 
Default mount options:    user_xattr acl
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              122101760
Block count:              488378368
Reserved block count:     24418918
Free blocks:              284623668
Free inodes:              121746504
First block:              0
Block size:               4096
Fragment size:            4096
Reserved GDT blocks:      907
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         8192
Inode blocks per group:   512
Flex block group size:    16
Filesystem created:       Sun Jul 21 14:54:37 2019
Last mount time:          Sat Sep 28 15:30:13 2019
Last write time:          Sat Sep 28 15:30:13 2019
Mount count:              13
Maximum mount count:      -1
Last checked:             Sun Jul 21 14:54:37 2019
Check interval:           0 (<none>)
Lifetime writes:          573 GB
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:	          256
Required extra isize:     32
Desired extra isize:      32
Journal inode:            8
Default directory hash:   half_md4
Directory Hash Seed:      378cc326-7ef2-4ef6-9e85-81a51237b942
Journal backup:           inode blocks
Checksum type:            crc32c
Checksum:                 0xa8a1152e
fmas@delta:~$ 

No strange noises from the drive. I do not think the drive is failing, but how can I be sure? I believe access to the “lost+found” folder has always been impossible. If I remember correctly, that folder has been there since I formatted the drive, and even when I tried to open it, it was not possible. But I may very well be wrong.

In any case, I have the feeling that the cause of the issue is more related to the Ubuntu updates than to a possible hardware failure.
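For what it's worth, the tune2fs figures above are internally consistent: a quick arithmetic sketch (using the pasted values) shows block count times block size matching exactly the sdb1 partition size fdisk reported:

```shell
# Cross-check (values copied from the tune2fs output above):
block_size=4096           # "Block size"
block_count=488378368     # "Block count"
free_blocks=284623668     # "Free blocks"
total_bytes=$((block_count * block_size))
free_bytes=$((free_blocks * block_size))
echo "total: ${total_bytes} bytes (~$((total_bytes / 1000000000)) GB)"
echo "free:  ${free_bytes} bytes (~$((free_bytes / 1000000000)) GB)"
# total comes out to 2,000,397,795,328 bytes, i.e. exactly the
# 3907026944-sector sdb1 partition from fdisk (3907026944 * 512).
```

So the filesystem metadata itself looks sane; that doesn't rule out bad sectors under the lost+found directory, though.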

I would recommend removing the current version of Docker and installing it again following the official documentation: https://docs.docker.com/install/linux/docker-ce/ubuntu/

In order to completely uninstall Docker, I will follow these commands:

To completely uninstall Docker:

*Step 1*

To identify what installed packages you have:

```
dpkg -l | grep -i docker
```

*Step 2*

```
sudo apt-get purge -y docker-engine docker docker.io docker-ce  
sudo apt-get autoremove -y --purge docker-engine docker docker.io docker-ce  
```

The above commands will not remove images, containers, volumes, or user created configuration files on your host. If you wish to delete all images, containers, and volumes run the following commands:

```
sudo rm -rf /var/lib/docker /etc/docker
sudo rm /etc/apparmor.d/docker
sudo groupdel docker
sudo rm -rf /var/run/docker.sock
```

You have removed Docker from the system completely.

Source: https://askubuntu.com/questions/935569/how-to-completely-uninstall-docker

Do you agree?

Please, try it. And install the docker again after that.

Dear all,

sorry for the delay.

I have removed docker as described in my previous post. Here below what happened:

fmas@delta:~$ dpkg -l | grep -i docker
ii  docker-ce                                  5:19.03.2~3-0~ubuntu-bionic                  amd64        Docker: the open-source application container engine
ii  docker-ce-cli                              5:19.03.2~3-0~ubuntu-bionic                  amd64        Docker CLI: the open-source application container engine
fmas@delta:~$ sudo apt-get purge -y docker-engine docker docker.io docker-ce
[sudo] password for fmas: 
Reading package lists... Done
Building dependency tree       
Reading state information... Done
Package 'docker-engine' is not installed, so not removed
Package 'docker' is not installed, so not removed
Package 'docker.io' is not installed, so not removed
The following packages were automatically installed and are no longer required:
  aufs-tools cgroupfs-mount libllvm7 pigz
Use 'sudo apt autoremove' to remove them.
The following packages will be REMOVED:
  docker-ce*
0 upgraded, 0 newly installed, 1 to remove and 0 not upgraded.
After this operation, 109 MB disk space will be freed.
(Reading database ... 166359 files and directories currently installed.)
Removing docker-ce (5:19.03.2~3-0~ubuntu-bionic) ...
(Reading database ... 166349 files and directories currently installed.)
Purging configuration files for docker-ce (5:19.03.2~3-0~ubuntu-bionic) ...
Processing triggers for systemd (237-3ubuntu10.29) ...
Processing triggers for ureadahead (0.100.0-21) ...
ureadahead will be reprofiled on next reboot
fmas@delta:~$ sudo apt-get autoremove -y --purge docker-engine docker docker.io docker-ce
Reading package lists... Done
Building dependency tree       
Reading state information... Done
Package 'docker-engine' is not installed, so not removed
Package 'docker' is not installed, so not removed
Package 'docker.io' is not installed, so not removed
Package 'docker-ce' is not installed, so not removed
The following packages will be REMOVED:
  aufs-tools* cgroupfs-mount* libllvm7* pigz*
0 upgraded, 0 newly installed, 4 to remove and 0 not upgraded.
After this operation, 66.1 MB disk space will be freed.
(Reading database ... 166346 files and directories currently installed.)
Removing aufs-tools (1:4.9+20170918-1ubuntu1) ...
Removing cgroupfs-mount (1.4) ...
Removing libllvm7:amd64 (1:7-3~ubuntu0.18.04.1) ...
Removing pigz (2.4-1) ...
Processing triggers for man-db (2.8.3-2ubuntu0.1) ...
Processing triggers for libc-bin (2.27-3ubuntu1) ...
(Reading database ... 166266 files and directories currently installed.)
Purging configuration files for aufs-tools (1:4.9+20170918-1ubuntu1) ...
Purging configuration files for cgroupfs-mount (1.4) ...
Processing triggers for systemd (237-3ubuntu10.29) ...
Processing triggers for ureadahead (0.100.0-21) ...
fmas@delta:~$ sudo rm -rf /var/lib/docker /etc/docker
fmas@delta:~$ sudo rm /etc/apparmor.d/docker
rm: cannot remove '/etc/apparmor.d/docker': No such file or directory
fmas@delta:~$ sudo groupdel docker
fmas@delta:~$ sudo rm -rf /var/run/docker.sock
fmas@delta:~$ 

After having done that, I rebooted the system. Once freshly rebooted, I followed the instructions to install Docker again, and it finished successfully. At this point I was ready to recreate my containers and started following the guide: link. As the run command I used the same one as before, that is:

docker run -d --restart unless-stopped -p 28967:28967 -e WALLET="" -e EMAIL="" -e ADDRESS="" -e BANDWIDTH="5TB" -e STORAGE="800GB" --mount type=bind,source="/home/fmas/Documents/Storj/Identity/storagenode",destination=/app/identity --mount type=bind,source="/hdd/StorjShareV3",destination=/app/config --name storagenode storjlabs/storagenode:beta

But the error I am getting is always the same:

fmas@delta:~$ sudo docker exec -it storagenode /app/dashboard.sh
2019-09-29T18:23:15.074Z	INFO	Configuration loaded from: /app/config/config.yaml
2019-09-29T18:23:15.096Z	INFO	Node ID: 1VyLHATWP4fNTrCdFX3GKqkHRAyz7Y77RSXQaT3ysrTvLx8fq4
2019-09-29T18:23:15.097Z	FATAL	Unrecoverable error	{"error": "transport error: connection error: desc = \"transport: error while dialing: dial tcp 127.0.0.1:7778: connect: connection refused\"", "errorVerbose": "transport error: connection error: desc = \"transport: error while dialing: dial tcp 127.0.0.1:7778: connect: connection refused\"\n\tstorj.io/storj/pkg/transport.DialAddressInsecure:31\n\tmain.dialDashboardClient:37\n\tmain.cmdDashboard:66\n\tstorj.io/storj/pkg/process.cleanup.func1.2:264\n\tstorj.io/storj/pkg/process.cleanup.func1:282\n\tgithub.com/spf13/cobra.(*Command).execute:762\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:852\n\tgithub.com/spf13/cobra.(*Command).Execute:800\n\tstorj.io/storj/pkg/process.Exec:73\n\tmain.main:296\n\truntime.main:203"}
fmas@delta:~$ 

I guess at this point my theory about the Docker package being compromised is no longer valid, since even a fresh reinstall does not seem to have solved the issue.

Should I proceed as @anon27637763 suggested and migrate the data to a different drive? I still have a 1TB drive in a different computer which I could use.

Thank you all for your efforts; it is really appreciated. Sadly, tomorrow is already Monday, and since I will be at work most of the day, I will not be able to run tests as quickly as I would like. :frowning:

Before you attempt to migrate the data, you may want to have a look at the real time docker event log when you attempt to start the storagenode…

You’ll need two terminal windows open side by side. Use one to monitor the events, and the other to start the node.

Terminal 1

sudo docker events --filter name=storagenode

EDIT: Adding the filter may leave out important networking events, so maybe leave that option off the command:

sudo docker events

Terminal 2

docker run -d --restart <the rest of your node options>

Maybe there will be some clues in the container event log.

However, even if you don’t need to migrate to a new storage drive, it’s a good idea to have a backup of the data.

Please, show your logs: docker logs --tail 100 storagenode