Is the CPU usage high?

Please check the second identity: https://documentation.storj.io/dependencies/identity#confirm-the-identity
Also, it should be a unique (new) identity, signed with a new authorization token.
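For reference, the documented check just counts the BEGIN lines in the certificates; with the default identity location on Linux (adjust the path if yours lives elsewhere) the first command should print 2 and the second 3:

grep -c BEGIN ~/.local/share/storj/identity/storagenode/ca.cert
grep -c BEGIN ~/.local/share/storj/identity/storagenode/identity.cert

If you get lower numbers, the identity most likely was never signed with an authorization token.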

You are right. I generated a new cert but used the same email as my first node. I thought it would be fine, but I think I needed to use a new email address? Even though the key after the email address did not match my original auth token. It is up and running now.

You do not need a different email address, but you do need a different authorization token. Perhaps you didn’t authorize it before.
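If it indeed wasn’t authorized, the signing step is roughly this (run with the identity binary, substituting the email:token string from your authorization email for the placeholder):

identity authorize storagenode <email:characterstring>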

it’s actually not a bad idea… to put some sort of latency tracking on the pieces so that the software can detect when the disk falls behind… but then one is into rejecting uploads, and presently the only option would be to reject downloads, because there is no noticeable ingress…

that means the data is inaccessible to the network, and thus if the number of available pieces falls below the threshold, repair will be started, which Storj wants to avoid…

so though it’s a local problem, it’s not a global problem… and fixing the local problem makes it a global problem, i guess… so i suppose in theory it’s not easy… maybe that’s why it hasn’t been done yet

As far as I know, the --storage2.max-concurrent-requests option only caps uploads (ingress).
It does not impact clients’ downloads, so data would still be available to the network.
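For reference, if someone does want to cap it, the option can be set in config.yaml or appended after the image name in the docker run command; the value 20 below is only an example, and the image tag should match whatever you normally run:

storage2.max-concurrent-requests: 20

or

docker run -d <your usual options> storjlabs/storagenode:latest --storage2.max-concurrent-requests=20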

dunno how max concurrent works these days, but when i used it 3-4 months ago it had a lot of issues… even if i ran it at 20-40, the incoming deletions and other such node work commands / cleanup or whatever it was would run into the max concurrent limit and cause the db locked issue… but i do believe a lot of effort has gone into smoothing that out…

yeah, it would make sense if it only affects uploads; uploads don’t matter, someone else can take those…
i was happy to get away from it, and though i didn’t see it at the time, it was the cause of much grief… which vanished when i finally got my system running fast enough that the storage could keep up with whatever the storagenode was doing.

not like my setup was crazy slow to begin with… but it had some bad configuration issues, like mixing SAS and SATA disks in the same vdevs or arrays, whatever we want to call them…

after i reduced my hdd latency and set max concurrent to infinite, my system started to run nearly error-free, maybe 1 error per 24 hours on average, and sometimes days without a single one

Well, this is interesting. My original node has just been disqualified, about a week after bringing up a new node on a separate machine but on the same IP.

Disqualification can happen only because of failed audits. An audit can fail if the storagenode is unable to provide a requested piece, either because it’s lost, inaccessible, or corrupted, or because of 4 timeouts on the same piece.
Please search for GET_AUDIT and failed on the same line in your logs.

Do you have the command I need to get the logs?

You can use scripts from this article:

I get zero results when I run docker logs storagenode 2>&1 | grep GET_AUDIT | grep failed

I do get results if I run docker logs storagenode 2>&1

Last night the node was showing offline. I left it on all day and the Storj node is now showing online, but I am disqualified from one satellite. Can I leave it running, or should I turn the node off?

The logs are deleted by default when you remove the container. To prevent this, you can redirect the logs to a file: https://documentation.storj.io/resources/faq/redirect-logs
So for now we will not see any errors, if there were any.
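For the future, the redirect is a single setting in config.yaml (the path below is just an example location inside the container), followed by a container restart so it takes effect:

log.output: "/app/config/node.log"

docker restart -t 300 storagenode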
If your node is disqualified on all satellites, the only way forward is to start from scratch.
If your node is not disqualified on all satellites, you can continue running the storagenode; the remaining satellites will keep paying for the service as long as they trust your node.

However, it’s better to figure out the reason for the failing audits.
Please check your disk for errors first.
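A rough sketch of the usual checks; the device name and drive letter are placeholders, and fsck must only be run while the filesystem is unmounted:

sudo fsck -f /dev/sdX1   (Linux, on the unmounted filesystem that holds the storage folder)
chkdsk /f D:             (Windows, from an elevated command prompt)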