Low bandwidth utilization


I would like to ask for some guidance on how to make my node perform better.
I have been running the node for 22 months on an optical line with 1 Gbit/s down / 350 Mbit/s up.
Initially a dedicated Windows machine received the storage via iSCSI from a Synology NAS; a month ago I moved the node into a Docker container on the NAS itself, so the shared content is now stored natively.
The Synology has two Exos X16 16 TB disks in SHR (mirror) with a 2×512 GB enterprise SSD read/write cache (Intel S4610). I know a single disk is enough, but this was already an existing setup.
There is no I/O wait, and the upload and download error rate is around 0.5% based on my evaluation of the storagenode log.
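For reference, this is roughly how I compute that rate: counting success and failure lines in the log. The exact message strings ("uploaded", "upload failed", etc.) are assumptions about the log format and may differ between versions, so adjust them to your node's actual output.

```python
# Sketch: estimate upload/download error rates from storagenode log lines.
# The message strings are assumptions about the log format.

def error_rates(lines):
    counts = {"upload": {"ok": 0, "fail": 0}, "download": {"ok": 0, "fail": 0}}
    for line in lines:
        for op in counts:
            if f"{op} failed" in line:
                counts[op]["fail"] += 1
            elif f"{op}ed" in line:  # "uploaded" / "downloaded"
                counts[op]["ok"] += 1
    return {
        op: c["fail"] / (c["ok"] + c["fail"]) if (c["ok"] + c["fail"]) else 0.0
        for op, c in counts.items()
    }

# Hypothetical sample lines, just to show the shape of the output.
sample = [
    "2024-05-01T10:00:00Z INFO piecestore uploaded piece=abc",
    "2024-05-01T10:00:01Z ERROR piecestore upload failed piece=def",
    "2024-05-01T10:00:02Z INFO piecestore downloaded piece=ghi",
]
print(error_rates(sample))
```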
Browsing Grafana, I see average utilization of 2 Mbit/s or less in both directions, and I don't understand why usage is so low. The neighbor check shows only my node, or at most 2 nodes, on my subnet. My ISP assigns a new IP address (PPPoE) every week, so the neighbors may change.
Could you give me advice on how I could improve the utilization of my node?
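For context, the neighbor check boils down to which /24 subnet the node's public IP falls into, since ingress is spread across distinct /24s. A minimal sketch (the example addresses are made up):

```python
import ipaddress

def same_subnet(ip_a, ip_b, prefix=24):
    """True if two IPv4 addresses fall in the same /prefix network,
    i.e. the nodes would share ingress as 'neighbors'."""
    net_a = ipaddress.ip_network(f"{ip_a}/{prefix}", strict=False)
    net_b = ipaddress.ip_network(f"{ip_b}/{prefix}", strict=False)
    return net_a == net_b

# Made-up example addresses: a weekly PPPoE reassignment can move the node
# into a different /24, which changes who its neighbors are.
print(same_subnet("203.0.113.10", "203.0.113.250"))   # same /24
print(same_subnet("203.0.113.10", "198.51.100.10"))   # different /24
```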


Usage depends only on customers' activity, not on your hardware.
Location plays some role, but not much.

From the hardware perspective:

  • do not use SMR
  • do not use network-attached storage (unless you run the node on the NAS itself); NFS, SMB, etc. are not supported, only iSCSI can work
  • it's better not to use USB drives; use internal drives if possible
  • do not use BTRFS
  • do not use exFAT

That’s probably all. And of course, a RAID built only for Storj makes no sense at all, unless you already have it.
See RAID vs No RAID choice
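A quick way to sanity-check the filesystem points above on Linux is to look at the mount table. A minimal sketch over /proc/mounts-style text (the sample lines and the storage path are hypothetical):

```python
# Filesystems the checklist above advises against for a node.
UNSUITABLE = {"btrfs", "exfat", "nfs", "nfs4", "cifs"}

def fs_type(mounts_text, path):
    """Return the filesystem type of the longest mount point containing path."""
    best = ("", None)
    for line in mounts_text.splitlines():
        parts = line.split()
        if len(parts) < 3:
            continue
        mountpoint, fstype = parts[1], parts[2]
        if path.startswith(mountpoint) and len(mountpoint) > len(best[0]):
            best = (mountpoint, fstype)
    return best[1]

# Hypothetical excerpt; on a real box use open("/proc/mounts").read() instead.
sample = """\
/dev/sda1 / ext4 rw,relatime 0 0
/dev/sdb1 /volume1 btrfs rw,relatime 0 0
"""
t = fs_type(sample, "/volume1/storj")
print(t, "OK" if t not in UNSUITABLE else "not recommended for a node")
```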

Roughly what I observe as well. Your setup is likely good enough.

Storj is not popular enough yet to require more bandwidth from nodes.

Thanks. Currently it’s running on a Synology SHR mirror with btrfs.
Maybe I’ll buy a big air-filled drive, break the RAID, format one disk to ext4 (Basic), and move the node to it.

Not needed. If you use Synology, they have optimized btrfs to work normally. But an external HDD could be SMR, so be careful.