Please guide me on configuring my node to optimize its speed

Hi guys. I have been running Storj since 03/2020. I dedicate the following resources to Storj:

  • High-speed internet connection: 300 Mbps
  • A computer with an i7 CPU and 16 GB of RAM
  • 4 HDDs × 8 TB = 32 TB

Currently I am running a node on Windows 10 Pro with only 6 TB of data stored, and the speed rarely exceeds 10 Mbps. My problem is that it feels like a real waste of resources when more than 96% of the bandwidth and 3 of the HDDs go unused. My current configuration file is below. How do I tweak the configuration to increase the speed and be able to expand onto the other HDDs?

# use color in user interface
color: true

# server address of the api gateway and frontend app
console.address: 127.0.0.1:8002

# the public address of the node, useful for nodes behind NAT
contact.external-address: mydomain.com:28967

# Maximum Amount of Idle Database connections, -1 means the stdlib default
db.max_idle_conns: 5

# Maximum Amount of Open Database connections, -1 means the stdlib default
db.max_open_conns: 30

# address to listen on for debug endpoints
debug.addr: 127.0.0.1:8003

# expose control panel
debug.control: true

# in-memory buffer for uploads
filestore.write-buffer-size: 512.0 KiB

# path to the certificate chain for this identity
identity.cert-path: E:\Program\Storj\Identity\storagenode\identity.cert

# path to the private key for this identity
identity.key-path: E:\Program\Storj\Identity\storagenode\identity.key

# the minimum log level to log
log.level: info

# can be stdout, stderr, or a filename
log.output: winfile:///E:\Program\Storj\Storage Node\\storagenode.log

# maximum duration to wait before requesting data
nodestats.max-sleep: 1m0s

# operator email address
operator.email: myemail@gmail.com

# operator wallet address
operator.wallet: myaddress

# file preallocated for uploading
pieces.write-prealloc-size: 4.0 MiB

# whether or not preflight check for local system clock is enabled on the satellite side. When disabling this feature, your storagenode may not setup correctly.
preflight.local-time-check: true

# how many concurrent retain requests can be processed at the same time.
retain.concurrency: 40

# public address to listen on
server.address: :28967

# private address to listen on
server.private-address: 127.0.0.1:7778

# total allocated bandwidth in bytes (deprecated)
storage.allocated-bandwidth: 90000 TB

# total allocated disk space in bytes
storage.allocated-disk-space: 7.20 TB

# path to store data in
storage.path: S:\Storj\

Don’t expect a single node to fill up 32 TB, nor will it ever use a 300 Mbps internet connection. Storj is used by real people; this isn’t mining. The only way to fill 32 TB is to run many nodes on many different IPs on different subnets.
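To illustrate the multi-node route: each additional node needs its own generated identity, its own storage path, and its own listening ports. A sketch of the settings that would have to differ for a second node on the same machine (all paths and port numbers below are placeholders, not recommendations):

```yaml
# second node: every listening port and path must differ from the first node
console.address: 127.0.0.1:8004
server.address: :28968
server.private-address: 127.0.0.1:7779
contact.external-address: mydomain.com:28968
identity.cert-path: E:\Program\Storj\Identity\storagenode2\identity.cert
identity.key-path: E:\Program\Storj\Identity\storagenode2\identity.key
storage.path: T:\Storj\
```

Bear in mind that, as the post above notes, nodes behind the same IP/subnet are treated as one node for traffic selection, so this spreads data across your disks rather than multiplying ingress.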


As @deathlessdd has noted, you will most likely never fill 32 TB with a single node on a single IP. As long as there are no errors in the logs, you just need to keep the node online and available.

Your node is a similar age to mine (05/2020). Like you, I’ve also overprovisioned my node - 32 GB RAM, 40 TB available, Windows 10 Pro and 500 Mbps - but it’s used for other tasks and would be online anyway. You do seem to be losing a lot of races, with almost 10% cancelled uploads compared to my 0.31%.

image


I wish I could see a daily average of 10 Mbps bandwidth utilization! In the last few weeks I usually see a 3 Mbps 24-hour average, and on some occasions 4-4.5 Mbps max.

@s1248.com hasn’t mentioned a 10 Mbps daily average. Although, as that would only be around 100 GB of daily transfer (ingress and egress), it should be possible for the largest of nodes on a busy day.
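For reference, the arithmetic behind that figure, as a quick sketch:

```python
# Convert a sustained link rate in megabits/second to daily transfer in gigabytes.
def daily_transfer_gb(mbps: float) -> float:
    seconds_per_day = 86_400
    bits_per_day = mbps * 1_000_000 * seconds_per_day
    return bits_per_day / 8 / 1_000_000_000  # bits -> bytes -> gigabytes

print(daily_transfer_gb(10))  # 10 Mbps sustained all day -> 108.0 GB
```

So a sustained 10 Mbps average works out to roughly 108 GB per day, which matches the "around 100 GB" estimate above.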


You are right. Daily transfer is only around 29 GB.

Something looks wrong on those ‘dip’ days.

My node has more used disk space…

image

Yeah, I think so. Something is wrong. Can you review my configuration?

The configuration is fine.

You should check the logs from the 3rd/4th and 7th/8th March. It looks as though your node wasn’t online for some time.
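One way to confirm that from the log itself is to scan the timestamps for large gaps between consecutive entries. A minimal sketch in Python (it assumes each log line begins with an ISO-style timestamp, which is what the Windows storagenode log looks like by default; adjust the parsing if yours differs):

```python
from datetime import datetime, timedelta

def find_gaps(lines, threshold=timedelta(minutes=30)):
    """Yield (start, end) pairs where consecutive log timestamps are
    further apart than `threshold` -- likely offline periods."""
    prev = None
    for line in lines:
        tokens = line.split()
        if not tokens:
            continue
        # assume the first token is a timestamp like 2021-03-04T02:15:07.123Z
        stamp = tokens[0][:19]
        try:
            ts = datetime.strptime(stamp, "%Y-%m-%dT%H:%M:%S")
        except ValueError:
            continue  # not a timestamped line; skip it
        if prev is not None and ts - prev > threshold:
            yield prev, ts
        prev = ts

# usage (path is illustrative):
# with open(r"E:\Program\Storj\Storage Node\storagenode.log") as f:
#     for start, end in find_gaps(f):
#         print(f"offline from {start} to {end}")
```

Run it against a copy of the log and look for gaps landing on those March dates.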

Can you write down the exact PowerShell command for this result?
successrate.ps1 -Path …
Thanks

@coinbirds I find it works better if you take a copy of the storagenode.log file. The original log file is by default in the program installation folder. So my command is:

.\successrate.ps1 -Path "$env:ProgramFiles\Storj\Storage Node\storagenode - Copy.log"

I have the same dips, but my node was online all the time. The node started in Feb 21 and has only used 2.41 TB.

This is a different graph, unrelated to the ones shown above.

The graph you’ve posted is notoriously slow to update as it requires the satellites to finish processing order data within set time limits, and often looks the way yours does. I ignore it except for the PB*h figure.


Aha, this one might be better. I must admit that @s1248.com’s graph seems a bit odd, especially since his node is almost a year older than mine.

As expected… a large node on a busy day… 105 GB daily transfer.