Symmetrical 1000Mbps, but only 20/2Mbps

Hello!
I have a symmetrical 1Gbps (1000Mbps) internet connection, which is absolutely not being utilized by the node. Initially the download speed was at most 15Mbps, but after filling 60% of the HDD it dropped to 4Mbps. The upload speed is at most 2Mbps. Why is this happening? I start the node from the CLI with this command:
docker run -d --restart unless-stopped --stop-timeout 300 \
  -p 28967:28967/tcp \
  -p 28967:28967/udp \
  -p 127.0.0.1:14002:14002 \
  -e WALLET="0xxxxxxxx" \
  -e EMAIL="xxxxx@xxxx.xxx" \
  -e ADDRESS="xxx.xxx.xxx.xxx:28967" \
  -e STORAGE="9TB" \
  --user $(id -u):$(id -g) \
  --mount type=bind,source="/mnt/storj/identity",destination=/app/identity \
  --mount type=bind,source="/mnt/storj",destination=/app/config \
  --name storagenode storjlabs/storagenode:latest
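
To confirm the node itself is handling traffic, I also check the container log (a quick sketch; "storagenode" is the container name from the command above, and the exact message wording can vary between versions):

# show the most recent node log lines (the node logs to stderr)
docker logs --tail 50 storagenode
# count finished uploads in the recent log
docker logs --tail 10000 storagenode 2>&1 | grep -c " uploaded "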

Storage Node Dashboard ( Node Version: v1.105.4 )

======================

ID XXXXXXXXXXX
Status ONLINE
Uptime 89h0m23s

               Available         Used       Egress      Ingress
 Bandwidth           N/A     64.31 GB     35.18 GB     29.12 GB (since Jul 1)
      Disk      17.17 GB      8.98 TB

Internal 127.0.0.1:7778

Hello Marbi, welcome to the forum :slight_smile:
There can be many reasons.

  • Your node does not have enough CPU/RAM
  • Your HDD cannot keep up
  • Filewalkers are running, congesting your HDD for the moment
  • There is simply no more data to be served to you right now.

Although 20/2 does seem low even for a quiet period, my money would be on a slow HDD. What model are you using?
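
One way to check whether the disk is the bottleneck (a sketch, assuming the sysstat package is installed and the data drive is /dev/sdb):

# extended per-device I/O stats every 5 seconds;
# %util close to 100 means the drive cannot keep up
iostat -x 5 /dev/sdb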


CPU: 19.8% RAM: 23Gi/45Gi Cores: 8
Seagate IronWolf 20TB

But I tested it on a 1TB NVMe drive and experienced exactly the same speeds. The machine had 4 cores and 32GB RAM, and now it has an even better configuration, but the speeds are still the same. I don't have a specific Storj satellite assigned; could that be the reason?

This is not mining: no matter how fast your connection is or how big your drive or array is, you will only get ingress if customers upload data and egress if customers download their files.

Storj was running upload tests until recently, but the tests are stopped for the weekend, so you only get data from customers. Here’s the traffic graph from my node:


[image: traffic graph from my node]
(the dips happened because I restarted the node or the garbage collector was running)

As you can see, the ingress has dropped a lot; hopefully it will resume on Monday.
120Mbps was about the maximum you could get with the tests running.

What else could affect your speed:

  1. New node - if the node is new, it is in the vetting period and gets less data until it has stayed online long enough to pass enough audits. Your node holds almost 9TB, so it's not new.
  2. Subnet neighbors - to prevent multiple pieces of the same file from ending up in the same physical location (or on the same ISP, etc.), Storj uses a filter that treats all nodes in the same /24 subnet as one big node. So if you run multiple nodes on the same IP (or in the same /24), each node gets less ingress (the total stays the same), and if someone else on your /24 runs a node, you will also get less data (a quick check is sketched after this list).
  3. Performance problems - maybe the drive is too fragmented, or a filewalker is running and reducing performance at the moment.
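
As a quick check for point 2, you can see which /24 your node sits in (a sketch; ifconfig.me is just one of many what-is-my-IP services):

# print the public IP and the /24 network the node selection filter would group it into
ip=$(curl -s https://ifconfig.me)
echo "public IP: $ip, /24 network: ${ip%.*}.0/24"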

However, I see that you only have 17GB of space available. The node will stop all ingress if it runs out of space and resume when a customer deletes some of their files. Maybe your node is periodically running out of space and stopping?
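
You can check the log for space-related messages (a sketch; the exact message text varies between versions):

# look for recent low/exhausted disk space messages in the node log
docker logs storagenode 2>&1 | grep -i "space" | tail -n 20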


wget -O /mnt/storj/500mb.bin http://noc.pirx.pl/500mb.bin
--2024-07-04 18:25:21-- http://noc.pirx.pl/500mb.bin
Resolving noc.pirx.pl (noc.pirx.pl)… 217.73.181.197
Connecting to noc.pirx.pl (noc.pirx.pl)|217.73.181.197|:80… connected.
HTTP request sent, awaiting response… 200 OK
Length: 524288000 (500M) [application/octet-stream]
Saving to: ‘/mnt/storj/500mb.bin’

/mnt/storj/500mb.bin 100%[====================================================================================================>] 500.00M 89.1MB/s in 5.8s

2024-07-04 18:25:27 (86.8 MB/s) - ‘/mnt/storj/500mb.bin’ saved [524288000/524288000]

I performed a restart, increased the allocation to 10TB, and the test download is OK, but Storj traffic is still only about 2/2Mbps. :frowning:
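
The wget test above only measures sequential throughput, though, while node traffic is mostly small random reads and writes, so I could also run a random-I/O test (a sketch, assuming fio is installed; it creates a temporary 1GB file under /mnt/storj):

# 4k random read/write for 60 seconds, bypassing the page cache
fio --name=storj-test --directory=/mnt/storj --rw=randrw --bs=4k \
    --size=1G --runtime=60 --time_based --ioengine=libaio --direct=1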

I have five /24 subnets available, so I could add 4 more nodes, but this is my first and only node.

Storage Node Dashboard ( Node Version: v1.105.4 )

======================

ID XXX
Status ONLINE
Uptime 57m6s

               Available         Used       Egress      Ingress
 Bandwidth           N/A     82.67 GB     39.39 GB     43.28 GB (since Jul 1)
      Disk       1.00 TB      9.00 TB

Internal 127.0.0.1:7778
External XXX:28967

And Bandwidth N/A <<< OMG

There was a bandwidth limit in the early days, but it's not used anymore, hence the N/A.

Only the speed between the customers (around the globe, not necessarily in the same city) and your node matters; perhaps your node is far away from the customers, so it loses races for pieces.
Speed to/from the satellites doesn't matter: customer data flows directly between the customer (or their nearest gateway, if they use S3) and your node.
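
A rough way to see whether your node loses races is to compare finished and canceled transfers in the log (a sketch; the message wording can differ between versions):

# finished vs canceled uploads in the current node log
docker logs storagenode 2>&1 | grep -c " uploaded "
docker logs storagenode 2>&1 | grep -c "upload canceled"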
I would also check the vetting status:
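
For example through the node's local dashboard API (a sketch; the port matches your -p 127.0.0.1:14002:14002 mapping, but the endpoints and field names are assumptions that may differ between versions):

# list the satellite IDs known to the node, then print audit info for each
for sat in $(curl -s http://127.0.0.1:14002/api/sno | jq -r '.satellites[].id'); do
  curl -s http://127.0.0.1:14002/api/sno/satellite/$sat | jq '.id, .audits'
done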

Also please provide the result of the command:

df --si -T

df --si -T
Filesystem Type Size Used Avail Use% Mounted on
udev devtmpfs 25G 0 25G 0% /dev
tmpfs tmpfs 5.0G 1.3M 5.0G 1% /run
/dev/sda2 ext4 106G 15G 85G 16% /
tmpfs tmpfs 25G 0 25G 0% /dev/shm
tmpfs tmpfs 5.3M 0 5.3M 0% /run/lock
tmpfs tmpfs 25G 0 25G 0% /sys/fs/cgroup
/dev/loop0 squashfs 67M 67M 0 100% /snap/core20/1828
/dev/loop1 squashfs 97M 97M 0 100% /snap/lxd/24061
/dev/loop2 squashfs 53M 53M 0 100% /snap/snapd/18357
/dev/loop3 squashfs 68M 68M 0 100% /snap/core20/2318
/dev/loop4 squashfs 41M 41M 0 100% /snap/snapd/21759
/dev/sdb1 ext4 11T 9.2T 1.2T 89% /mnt/storj
tmpfs tmpfs 5.0G 0 5.0G 0% /run/user/1000

ping europe-west-1.tardigrade.io
SEQ HOST SIZE TTL TIME STATUS
0 34.159.134.91 56 113 28ms951us
1 34.159.134.91 56 113 28ms748us
2 34.159.134.91 56 113 28ms726us
3 34.159.134.91 56 113 28ms729us
4 34.159.134.91 56 113 28ms744us
sent=5 received=5 packet-loss=0% min-rtt=28ms726us avg-rtt=28ms779us max-rtt=28ms951us

traceroute 34.159.134.91
Columns: ADDRESS, LOSS, SENT, LAST, AVG, BEST, WORST, STD-DEV
2 91.233.112.78 0% 5 7.9ms 7.8 7.7 7.9 0.1
3 80.54.111.1 0% 5 8.2ms 12.7 7.9 30.9 9.1
4 195.149.239.58 0% 5 13.4ms 14.2 13.4 16.9 1.4
5 195.149.239.57 0% 5 13.5ms 13.6 13.5 13.8 0.1
6 195.149.232.62 0% 5 9.4ms 9.4 9.3 9.5 0.1
7 34.159.134.91 0% 5 28.5ms 28.5 28.5 28.5 0

I am using a Tier 3 ISP connected to an Internet Exchange Point in Europe.

Is it possible that your node is actually full? It calculates free space against the allocation, not against the disk. You allocated only 9TB, and 9.2TB is already used, so I guess your node is full already.
But that would also mean the stats on the dashboard are not updated yet, either because of errors related to the databases and/or filewalkers, or because of bugs on our side.
I would recommend restarting the node with the scan on startup enabled (it's enabled by default) and waiting until all filewalkers finish the scan without any errors, including database-related errors.
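
For example (a sketch; the exact log messages vary between versions):

# restart the node, then follow the log and watch for filewalker progress and errors
docker restart storagenode
docker logs -f storagenode 2>&1 | grep -iE "walk|used-space|error"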