Node Dashboard Never Finishes Loading

How is the disk connected to your PC?
What is the filesystem?
What is the model of the HDD? Is it an SMR drive (you can check here: PSA: Beware of HDD manufacturers submarining SMR technology in HDD's without any public mention)?
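If you're not sure of the model, a minimal way to read it from Linux with smartmontools (assuming the package is installed and the disk shows up as /dev/sda; both are assumptions):

```bash
# Assumption: smartmontools is installed and the disk is /dev/sda.
# Prints the model and serial; look the model up against the
# manufacturer's SMR/CMR lists.
sudo smartctl -i /dev/sda

# USB enclosures often need the SAT pass-through to reach the drive:
sudo smartctl -d sat -i /dev/sda
```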

I have the same issue on my Pi 3… but for me the workaround is to wait until the dashboard appears after a few minutes…
Would be nice if this could be fixed.
(CPU and RAM usage are at max… and there are iowaits…)

Do you have a GUI on your OS?

Nope, Raspbian Lite only, with Docker for storj/watchtower, but I guess it's the SMR drive (4 TB 2.5" WD Elements).
And a VPN server to access it.

SMR… yes, it could be a reason, especially right after startup.

Could the others experiencing this issue (@kerwinc, @lordjynx, @nagolmas) provide their HDD information if convenient?

Edit: Information about the amount of memory allocated to your nodes could also be helpful.

Mine is a VM running in Proxmox, 8 GB of RAM, with an old 1 TB external USB drive. Definitely not SMR, and I thought I had run some health checks on it before deploying. I was going to use this until it got full, then pick up a 10 TB external and move it to a Raspberry Pi.


Docker (version 19.03.13, build 4484c46d9d) on an Ubuntu 20.04 server with 32 GB of RAM. Storage is on an NFS share served from FreeNAS.

I gave the Docker container a 4 TB chunk of the available space.
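(For context, the allocated space is set with the STORAGE environment variable in the Storj docker run command; an abbreviated sketch, with most required flags omitted and a placeholder storage path:)

```bash
# Abbreviated; the real run command also needs the identity mount,
# wallet/address/email variables, and published ports.
docker run -d --name storagenode \
  -e STORAGE="4TB" \
  --mount type=bind,source=/mnt/storj,destination=/app/config \
  storjlabs/storagenode:latest
```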

In your case you might want to look at this post (and the thread in general).

NFS is not a supported protocol for Storj. The only supported network storage protocol is iSCSI.

I’ll move it to a new iSCSI share then and let everyone know the results.

@baker
@Alexey
@nerdatwork
@moby

Thank you all and everyone else I missed!

I think that last point hit it… NFS issues possibly.

After moving the data — wow that was 1.2 million tiny files :slight_smile: — it’s been running now for close to 12 hours.
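(For anyone doing the same migration, a rough sketch, assuming the old data is mounted at /mnt/old-node and the new iSCSI volume at /mnt/new-node; both paths are placeholders. Stop the node first so the files don't change mid-copy:)

```bash
# Stop the node gracefully before copying (300 s is the stop timeout
# the Storj docs use).
docker stop -t 300 storagenode

# -a preserves ownership/permissions/timestamps; a second pass after the
# first completes catches any stragglers, then switch the mount over.
rsync -a --info=progress2 /mnt/old-node/ /mnt/new-node/
```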

Should I be worried about the audit failures?

Keep an eye on your log for audit failures. You can search for GET_AUDIT and failed on the same line. Your audit % will slowly go back up, but it won’t be 100% again.
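For example, something like this (assuming the container is named storagenode, as in the standard Storj run command):

```bash
# Failed audits appear as log lines containing both GET_AUDIT and failed.
docker logs storagenode 2>&1 | grep GET_AUDIT | grep failed
```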

What protocol did you use to bind the disk to the VM?
On any Linux hypervisor, the disk must be connected either as a virtual disk or via iSCSI (which is presented as a virtual disk in the end).

NFS and SMB are not supported.
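For Proxmox specifically, a hedged sketch of attaching a whole physical disk to a VM as a virtual disk (the VM ID 100 and the disk ID are placeholders):

```bash
# Placeholder VM ID and disk ID; list real disk IDs with: ls -l /dev/disk/by-id/
qm set 100 -scsi1 /dev/disk/by-id/ata-EXAMPLE_DISK_ID
```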

USB passthrough to the VM. I haven’t had any trouble for the first 2 months.

Perhaps its throughput was enough until new features were implemented and put more load on the storage.
But I hope it’s fixed in the coming release:

It’s been running great here on iSCSI backed by a ZFS zvol (FreeNAS/TrueNAS on this node), currently ~128 hours without issue.
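(For anyone who wants to reproduce that layout, a minimal sketch of the zvol side; the pool name tank, the dataset name, and the size are placeholders, and the iSCSI target itself is configured in the FreeNAS/TrueNAS web UI:)

```bash
# Placeholder pool/name/size. -s makes the zvol sparse so space is only
# consumed as the node fills; -V sets the exported block-device size.
zfs create -s -V 4T tank/storj-node

# The zvol then shows up as /dev/zvol/tank/storj-node, ready to be
# exported over iSCSI (web UI on FreeNAS/TrueNAS, targetcli on plain Linux).
```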

Thanks again everyone who assisted!

@Alexey @lordjynx

I believe the issue is the drive itself; it’s reporting a ton of bad sectors. It’s a roughly 8-year-old 1 TB external USB 2.0 drive. I realize my setup wasn’t optimal, and my node is disqualified on all satellites. I guess I’ll start over with a new drive!

Thank you for letting us know.
I’m sorry that you lost it. I hope the next one will be more durable.

Here’s a screenshot of the SMART values just for fun
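(For readers following along without the image: the same attributes can be read on Linux with smartmontools; the device path is an assumption:)

```bash
# Assumption: the drive is /dev/sda. Reallocated and pending sectors are
# the attributes that indicate bad-sector trouble.
sudo smartctl -A /dev/sda | grep -iE 'reallocated|pending'
```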

Yeah, 4,472 bad sectors isn’t good! If it’s only been in service a year, it might still have a warranty :slight_smile: