How to install 2 non-Docker nodes on a PC?

I want to install more than one node on the PC; the reason is that I want to run one node per HDD.
If one HDD breaks, I then lose only that one HDD's data, not the data from all 5 drives.
I have an i5 processor and 8 GB of RAM, which should be enough to run them; this is a dedicated PC for nodes.
The installer won't let me install a second node because the software is already installed. Is there some possibility other than Docker?

If it is a dedicated PC for nodes, is there a reason not to use Linux with Docker?

This is currently not possible unless you run the node with the CLI Docker install.
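For context, the CLI Docker install mentioned here follows the documented pattern of one container per node, each with its own identity, storage path, and port mappings. This is only a sketch; the ports, paths, wallet, and address below are placeholders you would replace with your own values:

```shell
# Sketch of a second Docker node; all values (ports, paths, wallet,
# email, address) are placeholders, not working settings.
docker run -d --restart unless-stopped --stop-timeout 300 \
  -p 28968:28967 \
  -p 127.0.0.1:14003:14002 \
  -e WALLET="0xYourWalletAddress" \
  -e EMAIL="you@example.com" \
  -e ADDRESS="your.ddns.example:28968" \
  -e STORAGE="1TB" \
  --mount type=bind,source=/mnt/disk2/identity,destination=/app/identity \
  --mount type=bind,source=/mnt/disk2/storage,destination=/app/config \
  --name storagenode2 \
  storjlabs/storagenode:latest
```

The key point is that the second container maps different host ports (28968 and 14003 here) to the same internal ports, so the nodes never collide.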

The devs could allow multi-install; that would solve the problem.

I don't want to use Docker because I don't have enough knowledge to maintain it; that's why I prefer Windows.

You could run the second node in a VM.
Everything you need to know is online.
Even if you've never used VMs before, it's not that hard to set up.

Fire up a VM and try it out before you start a node, so you get a feel for it.

On the other hand, everything you need to know to start and run a Docker node is written down, and after your first node it's very easy to set up and run more. As for maintenance: with Watchtower there's no maintenance at all.

Windows is a very limited use case; that's why you can only install one node at a time. On Linux you can install as many as you want.

I tried to create a second storagenode service, but 14002 and 7778 seem hardcoded; it doesn't use the YAML configuration. I changed the port to 14003 in the configuration, and on start it still tries to bind 14002.
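For reference, a second service can in principle be registered with sc.exe and pointed at its own config directory. The service name, paths, and the --config-dir flag here are assumptions for illustration, not tested values; each node still needs its own identity, storage path, and ports:

```shell
:: Sketch only (run in an elevated prompt); names and paths are examples.
:: The second install directory holds its own config.yaml and ports.
sc.exe create storagenode2 ^
  binpath= "\"C:\Program Files\Storj2\storagenode.exe\" run --config-dir \"C:\Program Files\Storj2\"" ^
  start= auto
sc.exe start storagenode2
```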

You can change the ports just fine; nothing is hardcoded, it's all in the config file.

I changed it in the config file, but it still tries to start on the standard 14002.

It works fine for me; I even changed my ports to 15002.

# how frequently bandwidth usage rollups are calculated
# bandwidth.interval: 1h0m0s

# how frequently expired pieces are collected
# collector.interval: 1h0m0s

# use color in user interface
# color: false

# server address of the api gateway and frontend app
# console.address:

# path to static resources
# console.static-dir: ""

# the public address of the node, useful for nodes behind NAT
contact.external-address: ""

# how frequently the node contact chore should run
# contact.interval: 1h0m0s

# Maximum Database Connection Lifetime, -1ns means the stdlib default
# db.conn_max_lifetime: -1ns

# Maximum Amount of Idle Database connections, -1 means the stdlib default
# db.max_idle_conns: 20

# Maximum Amount of Open Database connections, -1 means the stdlib default
# db.max_open_conns: 25

# address to listen on for debug endpoints
# debug.addr:

# If set, a path to write a process trace SVG to
# debug.trace-out: ""

# open config in default editor
# edit-conf: false

# path to the certificate chain for this identity
identity.cert-path: C:\Identity1\storagenode/identity.cert

# path to the private key for this identity
identity.key-path: C:\Identity1\storagenode/identity.key

# the public address of the Kademlia node, useful for nodes behind NAT

# operator email address

# operator wallet address
kademlia.operator.wallet: 0x0D53d36A422d3Dd841EBaC8508d839259bA0668f

# if true, log function filename and line number
# log.caller: false

# if true, set logging to development mode
# log.development: false

# configures log encoding. can either be 'console' or 'json'
# log.encoding: console

# the minimum log level to log
log.level: info

# can be stdout, stderr, or a filename
log.output: winfile:///C:\Program Files\Storj1\Storage Node\\storagenode.log

# if true, log stack traces
# log.stack: false

# address to send telemetry to
# metrics.addr:

# application name for telemetry identification
# storagenode.exe

# application suffix
# -release

# instance id prefix
# metrics.instance-prefix: ""

# how frequently to send up telemetry
# metrics.interval: 1m0s

# path to log for oom notices
# monkit.hw.oomlog: /var/log/kern.log

# maximum duration to wait before requesting data
# nodestats.max-sleep: 5m0s

# how often to sync reputation
# nodestats.reputation-sync: 4h0m0s

# how often to sync storage
# 12h0m0s

# operator email address
operator.email: ""

# operator wallet address
operator.wallet: ""

# how many concurrent retain requests can be processed at the same time.
# retain.concurrency: 40

# allows for small differences in the satellite and storagenode clocks
# retain.max-time-skew: 24h0m0s

# allows configuration to enable, disable, or test retain requests from the satellite. Options: (disabled/enabled/debug)
# retain.status: disabled

# public address to listen on
server.address: :12000

# log all GRPC traffic to zap logger
server.debug-log-traffic: false

# if true, client leaves may contain the most recent certificate revocation for the current certificate
# server.extensions.revocation: true

# if true, client leaves must contain a valid "signed certificate extension" (NB: verified against certs in the peer ca whitelist; i.e. if true, a whitelist must be provided)
# server.extensions.whitelist-signed-leaf: false

# path to the CA cert whitelist (peer identities must be signed by one these to be verified). this will override the default peer whitelist
# server.peer-ca-whitelist-path: ""

# identity version(s) the server will be allowed to talk to
# server.peer-id-versions: latest

# private address to listen on

# url for revocation database (e.g. bolt://some.db OR redis://
# server.revocation-dburl: bolt://C:\Program Files\Storj1\Storage Node/revocations.db

# if true, uses peer ca whitelist checking
# server.use-peer-ca-whitelist: true

# total allocated bandwidth in bytes
storage.allocated-bandwidth: 50.0 TB

# total allocated disk space in bytes
storage.allocated-disk-space: 1 TB

# how frequently Kademlia bucket should be refreshed with node stats
# storage.k-bucket-refresh-interval: 1h0m0s

# path to store data in
storage.path: E:\

# a comma-separated list of approved satellite node urls
# storage.whitelisted-satellites:,,,

# how often the space used cache is synced to persistent storage
# storage2.cache-sync-interval: 1h0m0s

# how soon before expiration date should things be considered expired
# storage2.expiration-grace-period: 48h0m0s

# how many concurrent requests are allowed, before uploads are rejected.
# storage2.max-concurrent-requests: 40

# how frequently Kademlia bucket should be refreshed with node stats
# storage2.monitor.interval: 1h0m0s

# how much bandwidth a node at minimum has to advertise
# storage2.monitor.minimum-bandwidth: 500.0 GB

# how much disk space a node at minimum has to advertise
# storage2.monitor.minimum-disk-space: 500.0 GB

# how long after OrderLimit creation date are OrderLimits no longer accepted
# storage2.order-limit-grace-period: 1h0m0s

# length of time to archive orders before deletion
# storage2.orders.archive-ttl: 168h0m0s

# duration between archive cleanups
# storage2.orders.cleanup-interval: 24h0m0s

# timeout for dialing satellite during sending orders
# storage2.orders.sender-dial-timeout: 1m0s

# duration between sending
# storage2.orders.sender-interval: 1h0m0s

# timeout for read/write operations during sending
# storage2.orders.sender-request-timeout: 1h0m0s

# timeout for sending
# storage2.orders.sender-timeout: 1h0m0s

# allows for small differences in the satellite and storagenode clocks
# storage2.retain-time-buffer: 1h0m0s

# Interval to check the version
# version.check-interval: 15m0s

# Request timeout for version checks
# version.request-timeout: 1m0s

# server address to check its version against
# version.server-address:

On start I get this:

2019-11-05T00:09:48.578+0200 FATAL Unrecoverable error {"error": "listen tcp bind: Only one usage of each socket address (protocol/network address/port) is normally permitted."}

You need to remove the # from the lines you changed in the config.
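To be clear: lines starting with # are ignored, and the node falls back to its built-in defaults (hence the bind on 14002). The change only takes effect once the # is removed; the port value here is just an example:

```yaml
# still commented out: the node uses the default dashboard port 14002
# console.address: 127.0.0.1:14003

# active: the dashboard now listens on 14003
console.address: 127.0.0.1:14003
```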

OK, thanks, I will try it.

OK, it started fine and looks like it's working, but it's not online yet; I need to wait some time.

The node is online and collecting data. Now that I know I can run many Windows nodes on one PC, I will consider the recommended setup of one HDD per node. Right now I have just 5 HDDs in a stripe, so if I lose one HDD, all the data will be lost; now I can change to a more reliable configuration.
If someone wants to know how to do it, I can share how.

Don't start all five nodes at the same time.
Fill the first one to 75% and then start the second node.
Same for the other nodes. Vetting would take a very long time if you start two or more nodes at the same time.


Why does it take a long time?

A new node gets 25% of the normal data. So two or more new nodes on the same IP each get only a fraction of that 25%, and the audits take forever across multiple unvetted nodes; roughly speaking, five new nodes started at once would each be vetted about five times slower than a single one. This only applies if you start multiple nodes at the same time.