Hello!
I have a symmetrical 1000Gbps internet connection, which the node is absolutely not utilizing. Initially the download speed was at most 15 Mbps, but after filling 60% of the HDD it dropped to 4 Mbps. The upload speed is at most 2 Mbps. Why is this happening? I am running the node from the CLI with this command:
docker run -d --restart unless-stopped --stop-timeout 300 \
    -p 28967:28967/tcp \
    -p 28967:28967/udp \
    -p 127.0.0.1:14002:14002 \
    -e WALLET="0xxxxxxxx" \
    -e EMAIL="xxxxx@xxxx.xxx" \
    -e ADDRESS="xxx.xxx.xxx.xxx:28967" \
    -e STORAGE="9TB" \
    --user $(id -u):$(id -g) \
    --mount type=bind,source="/mnt/storj/identity",destination=/app/identity \
    --mount type=bind,source="/mnt/storj",destination=/app/config \
    --name storagenode storjlabs/storagenode:latest
Storage Node Dashboard ( Node Version: v1.105.4 )
======================
ID XXXXXXXXXXX
Status ONLINE
Uptime 89h0m23s
Available Used Egress Ingress
Bandwidth N/A 64.31 GB 35.18 GB 29.12 GB (since Jul 1)
Disk 17.17 GB 8.98 TB
But I tested it on a 1TB NVMe drive and experienced exactly the same speeds. The machine had 4 cores and 32GB RAM, and now it has an even better configuration, but the speeds are still the same. I don't have a specific Storj satellite assigned; could that be the reason?
This is not mining: no matter how fast your connection is or how big your drive or array, you will only get ingress if customers upload data, and you will only get egress if customers download their files.
Storj was running upload tests until recently, but the tests are stopped for the weekend, so you only get data from customers. Here’s the traffic graph from my node:
(the dips happened because I restarted the node or garbage collector was running)
As you can see, the ingress has dropped a lot; hopefully it will resume on Monday.
About 120 Mbps was the maximum you could get with the tests running.
What else could affect your speed:
New node - if the node is new, it is in the vetting period and gets less data until it stays online long enough to pass enough audits. Your node has 8TB, so it's not new.
Subnet neighbors - To prevent multiple pieces of the same file from ending up in the same physical location/on the same ISP/etc., Storj uses a filter that basically treats all nodes in the same /24 subnet as one big node. So if you run multiple nodes on the same IP (or /24 subnet), each node gets less ingress (the total ingress remains the same), and if someone else on your /24 has a node, you will get less data.
Performance problems - maybe the drive is too fragmented, or a filewalker is running and reducing performance at that time.
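The /24 grouping described above can be sketched in a few lines. The IPs and the equal-split assumption are illustrative only; the real node-selection logic is more involved, but the subnet key is the core idea:

```python
import ipaddress

def subnet_key(ip: str) -> str:
    """Collapse an IPv4 address to its /24 network -- the granularity
    at which the filter treats nodes as one big node (simplified sketch)."""
    return str(ipaddress.ip_network(f"{ip}/24", strict=False))

# Hypothetical node IPs: the first two share a /24, so they split
# that subnet's share of ingress between them.
nodes = ["203.0.113.10", "203.0.113.77", "198.51.100.5"]
groups = {}
for ip in nodes:
    groups.setdefault(subnet_key(ip), []).append(ip)

shares = {}
for net, members in groups.items():
    # Assumed equal split: first per subnet, then per node within it.
    for ip in members:
        shares[ip] = 1 / len(groups) / len(members)

for ip, share in shares.items():
    print(f"{ip}: ~{share:.0%} of total ingress")
```

So the lone node on its own /24 gets roughly twice the ingress of each of the two neighbors, even though all three are online and healthy.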
However, I see that you only have 17GB of space available. The node will stop all ingress if it runs out of space and resume it when a customer deletes some of their files. Maybe your node is periodically running out of space and stopping?
There was a bandwidth limit in the early days, but it's no longer used, hence the N/A.
The only speed that matters is between the customers (around the globe, not necessarily in the same city) and your node; perhaps your node is far away from the customers, so it loses races for pieces.
Speed to/from the satellites doesn't matter; customer data flows directly between the customer (or their nearest gateway, if they use S3) and your node.
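Those races work roughly like long-tail cancellation: the uplink starts transfers to more nodes than it needs and keeps only the fastest. The node names, latencies, and counts below are made up for illustration, not current network defaults:

```python
def race(latencies_ms: dict, needed: int) -> list:
    """Return the nodes that win a piece: the first `needed` to finish.
    Latency stands in for total transfer time in this sketch."""
    return sorted(latencies_ms, key=latencies_ms.get)[:needed]

# Hypothetical round-trip times from a customer to candidate nodes (ms).
candidates = {"node_near": 20, "node_mid": 80, "node_far": 350}
winners = race(candidates, needed=2)
print(winners)  # the distant node loses the race and gets no ingress
```

A node that consistently finishes last simply never stores those pieces, which shows up as low ingress even on a fast pipe.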
I would also check the vetting status:
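One way to sketch that check: the node's local dashboard API (port 14002 in the `docker run` above) reports per-satellite audit counts, and a node is commonly described as vetted on a satellite after about 100 successful audits. The endpoint path, response shape, and threshold below are assumptions for illustration; verify them against your node version:

```python
import json

# Hypothetical response from the local dashboard API
# (e.g. http://127.0.0.1:14002/api/... -- exact path is an assumption).
sample = json.loads("""
{
  "satellites": [
    {"url": "us1.example:7777", "auditCount": 100},
    {"url": "eu1.example:7777", "auditCount": 37}
  ]
}
""")

VETTING_AUDITS = 100  # commonly cited threshold; an assumption, not an official constant

statuses = {}
for sat in sample["satellites"]:
    done = sat["auditCount"]
    statuses[sat["url"]] = (
        "vetted" if done >= VETTING_AUDITS else f"{done}/{VETTING_AUDITS} audits"
    )

for url, status in statuses.items():
    print(f"{url}: {status}")
```

An unvetted satellite would explain reduced ingress from that satellite specifically, though as noted above, an 8TB node has almost certainly passed vetting long ago.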
Is it possible that your node is actually full? It calculates free space within the allocation, not on the disk. You allocated only 9TB and 9.2TB is already used, so I guess your node is full already.
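A quick sketch of that arithmetic using the dashboard numbers, assuming the decimal units the dashboard appears to use (1 TB = 10^12 bytes):

```python
# Figures from the post: -e STORAGE="9TB" vs. the dashboard's "Used" value.
ALLOCATED = 9.00e12  # allocated in the docker run command
USED      = 8.98e12  # "Used" on the dashboard

free = ALLOCATED - USED
print(f"free allocation: {free / 1e9:.0f} GB")
```

That leaves only about 20 GB of headroom against the allocation (the dashboard shows 17.17 GB once trash and overhead are counted), so the node is effectively full regardless of how much physical disk remains.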
But that would also mean the stats on the dashboard are not updated yet, either because of errors related to the databases and/or filewalkers, or because of bugs on our side.
I would recommend restarting the node with the scan on startup enabled (it's enabled by default) and waiting until all filewalkers finish the scan without any errors, including errors related to the databases.