Release preparation v1.136

I found only this:
Fault bucket 1508655261599125727, type 5
Event Name: RADAR_PRE_LEAK_64
Response: Not available
Cab Id: 0

Problem signature:
P1: storagenode.exe
P2: 1.136.4.0
P3: 10.0.19042.2.0.0
P4:
P5:
P6:
P7:
P8:
P9:
P10:

Attached files:
\\?\C:\Users\Miner6\AppData\Local\Temp\RDR5FE0.tmp\empty.txt
\\?\C:\ProgramData\Microsoft\Windows\WER\Temp\WER5FF1.tmp.WERInternalMetadata.xml
\\?\C:\ProgramData\Microsoft\Windows\WER\Temp\WER6001.tmp.xml
\\?\C:\ProgramData\Microsoft\Windows\WER\Temp\WER600C.tmp.csv
\\?\C:\ProgramData\Microsoft\Windows\WER\Temp\WER601C.tmp.txt

These files may be available here:

Analysis symbol:
Rechecking for solution: 0
Report Id: 5fc9f771-e290-4088-af71-e58b0fcf5dc8
Report Status: 268435456
Hashed bucket: e911fe0fe2ca0c0c24efd1f79c23d0df
Cab Guid: 0

I can't find any of the files specified in this error report.

Did you enable memtbl?

No, because this server has only 48GB of RAM

I shared the info with the team.

I downgraded to 1.135 three hours ago; now I'm waiting to see whether there will be any problems.


1.135.5 also goes down, without any log, so it is not related to version 1.136.


Try moving the disk with a failing node to another server to isolate the problem further.

I don't see that any specific device is failing; it seems to happen with all of them.
If a disk were failing, there would usually be "no response from HDD" entries in the logs, but there is nothing like that. No errors at all in the logs.

It could be interesting to move one of the disks to a USB-to-SATA adapter to see whether the SATA/SAS disk controller or the cables are the problem.

I was able to reproduce this by first creating a situation that would make the node crash and, in addition, making the log file read-only. At that point the service crashes without creating any log entry.

You can still find out what's wrong by stopping the storage node service and trying to start it from a cmd prompt. In my case it looks like this:

C:\Users\jensh>"C:\Program Files\Storj\Storage Node\storagenode.exe" run --config-dir "C:\Program Files\Storj\Storage Node\\"
Error: open sink "winfile:///C:\\Program Files\\Storj\\Storage Node\\\\storagenode.log": open C:\Program Files\Storj\Storage Node\\storagenode.log: Access is denied.
Usage:
  storagenode run [flags]

Flags:
      --bandwidth.interval duration                              how frequently bandwidth usage cache should be synced with the db (default 1h0m0s)
      --collector.expiration-batch-size int                      how many expired pieces to delete in one batch. If <= 0, all expired pieces will be deleted in one batch. (ignored by flat file store) (default 1000)
      --collector.expiration-grace-period duration               how long should the collector wait before deleting expired pieces. Should not be less than 30 min since nodes are allowed to be 30 mins out of sync with the satellite. (default 1h0m0s)
      --collector.flat-file-batch-limit int                      how many per hour flat files can be deleted in one batch. (default 5)
      --collector.interval duration                              how frequently expired pieces are collected (default 1h0m0s)
      --collector.reverse-order                                  delete expired pieces in reverse order (recently expired first)
      --console.address string                                   server address of the api gateway and frontend app (default "127.0.0.1:14002")
      --console.static-dir string                                path to static resources
      --contact.check-in-timeout duration                        timeout for the check-in request (default 10m0s)
      --contact.external-address string                          the public address of the node, useful for nodes behind NAT
      --contact.interval duration                                how frequently the node contact chore should run (default 1h0m0s)
      --contact.self-signed-tags strings                         coma separated key=value pairs, which will be self signed and used as tags
      --contact.tags signedtags                                  protobuf serialized signed node tags in hex (base64) format
      --debug.addr string                                        address to listen on for debug endpoints (default "127.0.0.1:0")
      --edit-conf                                                open config in default editor
      --filestore.force-sync                                     if true, force disk synchronization and atomic writes
      --filestore.write-buffer-size memory.Size                  in-memory buffer for uploads (default 128.0 KiB)
      --forget-satellite.chore-interval duration                 how often to run the chore to check for satellites for the node to forget (default 1m0s)
      --forget-satellite.num-workers int                         number of workers to handle forget satellite (default 1)
      --graceful-exit.chore-interval duration                    how often to run the chore to check for satellites for the node to exit. (default 1m0s)
      --graceful-exit.min-bytes-per-second memory.Size           the minimum acceptable bytes that an exiting node can transfer per second to the new node (default 5.00 KB)
      --graceful-exit.min-download-timeout duration              the minimum duration for downloading a piece from storage nodes before timing out (default 2m0s)
      --graceful-exit.num-concurrent-transfers int               number of concurrent transfers per graceful exit worker (default 5)
      --graceful-exit.num-workers int                            number of workers to handle satellite exits (default 4)
      --hashstore.compaction.alive-fraction float                if the log file is not this alive, compact it (default 0.25)
      --hashstore.compaction.delete-trash-immediately            if set, deletes all trash immediately instead of after the ttl
      --hashstore.compaction.expires-days uint                   number of days to keep trash records around (default 7)
      --hashstore.compaction.max-log-size uint                   max size of a log file (default 1073741824)
      --hashstore.compaction.ordered-rewrite                     controls if we collect records and sort them and rewrite them before the hashtbl (default true)
      --hashstore.compaction.probability-power float             power to raise the rewrite probability to. >1 means must be closer to the alive fraction to be compacted, <1 means the opposite (default 2)
      --hashstore.compaction.rewrite-multiple float              multiple of the hashtbl to rewrite in a single compaction (default 10)
      --hashstore.hashtbl.mlock                                  if set, call mlock on any mmap/mremap'd data (default true)
      --hashstore.hashtbl.mmap                                   if set, uses mmap to do reads
      --hashstore.logs-path string                               path to store log files in (by default, it's relative to the storage directory)' (default "hashstore")
      --hashstore.memtbl.mlock                                   if set, call mlock on any mmap/mremap'd data (default true)
      --hashstore.memtbl.mmap                                    if set, uses mmap to do reads
      --hashstore.store.flush-semaphore int                      controls the number of concurrent flushes to log files
      --hashstore.store.sync-writes                              if set, writes to the log file and table are fsync'd to disk
      --hashstore.sync-lifo                                      controls if waiters are processed in LIFO or FIFO order.
      --hashstore.table-default-kind TableKind                   default table kind to use (hashtbl or memtbl) during NEW compations (default HashTbl)
      --hashstore.table-path string                              path to store tables in. Can be same as LogsPath, as subdirectories are used (by default, it's relative to the storage directory) (default "hashstore")
      --healthcheck.details                                      Enable additional details about the satellite connections via the HTTP healthcheck.
      --healthcheck.enabled                                      Provide health endpoint (including suspension/audit failures) on main public port, but HTTP protocol. (default true)
  -h, --help                                                     help for run
      --identity.cert-path string                                path to the certificate chain for this identity (default "C:\\Users\\jensh\\AppData\\Roaming\\Storj\\Identity\\Storagenode\\identity.cert")
      --identity.key-path string                                 path to the private key for this identity (default "C:\\Users\\jensh\\AppData\\Roaming\\Storj\\Identity\\Storagenode\\identity.key")
      --nodestats.max-sleep duration                             maximum duration to wait before requesting data (default 5m0s)
      --nodestats.storage-sync duration                          how often to sync storage (default 12h0m0s)
      --operator.email string                                    operator email address
      --operator.wallet string                                   operator wallet address
      --operator.wallet-features wallet-features                 operator wallet features
      --pieces.delete-to-trash                                   move pieces to trash upon deletion. Warning: if set to false, you risk disqualification for failed audits if a satellite database is restored from backup. (default true)
      --pieces.enable-flat-expiration-store                      use flat files for the piece expiration store instead of a sqlite database (default true)
      --pieces.enable-lazy-filewalker                            run garbage collection and used-space calculation filewalkers as a separate subprocess with lower IO priority (default true)
      --pieces.file-stat-cache string                            optional type of file stat cache. Might be useful for slow disk and limited memory. Available options: badger (EXPERIMENTAL)
      --pieces.flat-expiration-include-sq-lite                   use and remove piece expirations from the sqlite database _also_ when the flat expiration store is enabled (default true)
      --pieces.flat-expiration-store-file-handles int            number of concurrent file handles to use for the flat expiration store (default 1000)
      --pieces.flat-expiration-store-max-buffer-time duration    maximum time to buffer writes to the flat expiration store before flushing (default 5m0s)
      --pieces.flat-expiration-store-path string                 where to store flat piece expiration files, relative to the data directory (default "piece_expirations")
      --pieces.trash-chore-interval duration                     how often to empty check the trash, and delete old files (default 24h0m0s)
      --pieces.write-prealloc-size memory.Size                   deprecated (default 4.0 MiB)
      --preflight.database-check                                 whether or not preflight check for database is enabled. (default true)
      --preflight.local-time-check                               whether or not preflight check for local system clock is enabled on the satellite side. When disabling this feature, your storagenode may not setup correctly. (default true)
      --reputation.cache                                         store reputation stats in cache (default true)
      --reputation.interval duration                             how often to sync reputation (default 4h0m0s)
      --reputation.max-sleep duration                            maximum duration to wait before requesting data (default 5m0s)
      --retain.cache-path string                                 path to the cache directory for retain requests. (default "C:\\Program Files\\Storj\\Storage Node/retain")
      --retain.concurrency int                                   how many concurrent retain requests can be processed at the same time. (default 1)
      --retain.max-time-skew duration                            allows for small differences in the satellite and storagenode clocks (default 72h0m0s)
      --retain.status storj.Status                               allows configuration to enable, disable, or test retain requests from the satellite. Options: (disabled/enabled/debug) (default enabled)
      --server.address string                                    public address to listen on (default ":7777")
      --server.debouncing-enabled                                whether to debounce incoming messages (default true)
      --server.extensions.revocation                             if true, client leaves may contain the most recent certificate revocation for the current certificate (default true)
      --server.extensions.whitelist-signed-leaf                  if true, client leaves must contain a valid "signed certificate extension" (NB: verified against certs in the peer ca whitelist; i.e. if true, a whitelist must be provided)
      --server.peer-ca-whitelist-path string                     path to the CA cert whitelist (peer identities must be signed by one these to be verified). this will override the default peer whitelist
      --server.peer-id-versions string                           identity version(s) the server will be allowed to talk to (default "latest")
      --server.private-address string                            private address to listen on (default "127.0.0.1:7778")
      --server.revocation-dburl string                           url for revocation database (e.g. bolt://some.db OR redis://127.0.0.1:6378?db=2&password=abc123) (default "bolt://C:\\Program Files\\Storj\\Storage Node/revocations.db")
      --server.tcp-fast-open                                     enable support for tcp fast open (default true)
      --server.tcp-fast-open-queue int                           the size of the tcp fast open queue (default 256)
      --server.use-peer-ca-whitelist                             if true, uses peer ca whitelist checking (default true)
      --storage.allocated-disk-space memory.Size                 total allocated disk space in bytes (default 1.00 TB)
      --storage.path string                                      path to store data in (default "C:\\Program Files\\Storj\\Storage Node/storage")
      --storage2.cache-sync-interval duration                    how often the space used cache is synced to persistent storage (default 1h0m0s)
      --storage2.database-dir string                             directory to store databases. if empty, uses data path
      --storage2.expiration-grace-period duration                how soon before expiration date should things be considered expired (default 48h0m0s)
      --storage2.max-concurrent-requests int                     how many concurrent requests are allowed, before uploads are rejected. 0 represents unlimited.
      --storage2.max-used-serials-size memory.Size               amount of memory allowed for used serials store - once surpassed, serials will be dropped at random (default 1.00 MB)
      --storage2.min-upload-speed memory.Size                    a client upload speed should not be lower than MinUploadSpeed in bytes-per-second (E.g: 1Mb), otherwise, it will be flagged as slow-connection and potentially be closed (default 0 B)
      --storage2.min-upload-speed-congestion-threshold float     if the portion defined by the total number of alive connection per MaxConcurrentRequest reaches this threshold, a slow upload client will no longer be monitored and flagged (default 0.8)
      --storage2.min-upload-speed-grace-duration duration        if MinUploadSpeed is configured, after a period of time after the client initiated the upload, the server will flag unusually slow upload client (default 10s)
      --storage2.monitor.interval duration                       how frequently to report storage stats to the satellite (default 1h0m0s)
      --storage2.monitor.minimum-bandwidth memory.Size           how much bandwidth a node at minimum has to advertise (deprecated) (default 0 B)
      --storage2.monitor.minimum-disk-space memory.Size          how much disk space a node at minimum has to advertise (default 500.00 GB)
      --storage2.monitor.verify-dir-readable-interval duration   how frequently to verify the location and readability of the storage directory (default 1m0s)
      --storage2.monitor.verify-dir-readable-timeout duration    how long to wait for a storage directory readability verification to complete (default 1m0s)
      --storage2.monitor.verify-dir-warn-only                    if the storage directory verification check fails, log a warning instead of killing the node
      --storage2.monitor.verify-dir-writable-interval duration   how frequently to verify writability of storage directory (default 5m0s)
      --storage2.monitor.verify-dir-writable-timeout duration    how long to wait for a storage directory writability verification to complete (default 1m0s)
      --storage2.order-limit-grace-period duration               how long after OrderLimit creation date are OrderLimits no longer accepted (default 1h0m0s)
      --storage2.orders.archive-ttl duration                     length of time to archive orders before deletion (default 168h0m0s)
      --storage2.orders.cleanup-interval duration                duration between archive cleanups (default 5m0s)
      --storage2.orders.max-sleep duration                       maximum duration to wait before trying to send orders (default 30s)
      --storage2.orders.path string                              path to store order limit files in (default "C:\\Program Files\\Storj\\Storage Node/orders")
      --storage2.orders.sender-dial-timeout duration             timeout for dialing satellite during sending orders (default 1m0s)
      --storage2.orders.sender-interval duration                 duration between sending (default 1h0m0s)
      --storage2.orders.sender-timeout duration                  timeout for sending (default 1h0m0s)
      --storage2.piece-scan-on-startup                           if set to true, all pieces disk usage is recalculated on startup (default true)
      --storage2.stream-operation-timeout duration               how long to spend waiting for a stream operation before canceling (default 30m0s)
      --storage2.trust.cache-path string                         file path where trust lists should be cached (default "C:\\Program Files\\Storj\\Storage Node/trust-cache.json")
      --storage2.trust.exclusions trust-exclusions               list of trust exclusions
      --storage2.trust.refresh-interval duration                 how often the trust pool should be refreshed (default 6h0m0s)
      --storage2.trust.sources trust-sources                     list of trust sources (default https://static.storj.io/dcs-satellites)
      --storage2migration.buffer-size int                        how many pieces to buffer (default 1)
      --storage2migration.delay duration                         constant delay between migration of two pieces. 0 means no delay
      --storage2migration.delete-expired                         whether to also delete expired pieces; has no effect if expired are migrated (default true)
      --storage2migration.interval duration                      how long to wait between pooling satellites for active migration (default 10m0s)
      --storage2migration.jitter                                 whether to add jitter to the delay; has no effect if delay is 0 (default true)
      --storage2migration.migrate-expired                        whether to also migrate expired pieces (default true)
      --storage2migration.migrate-regardless                     whether to also migrate pieces for satellites outside currently set
      --storage2migration.suppress-central-migration             if true, whether to suppress central control of migration initiation
      --version.check-interval duration                          Interval to check the version (default 15m0s)
      --version.request-timeout duration                         Request timeout for version checks (default 1m0s)
      --version.run-mode run-mode                                Define the run mode for the version checker. Options (once,periodic,disable) (default periodic)
      --version.server-address string                            server address to check its version against (default "https://version.storj.io")

Global Flags:
      --color                            use color in user interface
      --config-dir string                main directory for storagenode configuration (default "C:\\Program Files\\Storj\\Storage Node\\")
      --db.conn_max_lifetime duration    Maximum Database Connection Lifetime, -1ns means the stdlib default (default 30m0s)
      --db.max_idle_conns int            Maximum Amount of Idle Database connections, -1 means the stdlib default (default 1)
      --db.max_open_conns int            Maximum Amount of Open Database connections, -1 means the stdlib default (default 5)
      --debug.trace-out string           If set, a path to write a process trace SVG to
      --defaults string                  determines which set of configuration defaults to use. can either be 'dev' or 'release' (default "release")
      --identity-dir string              main directory for storagenode identity credentials (default "C:\\Users\\jensh\\AppData\\Roaming\\Storj\\Identity\\Storagenode")
      --log.caller                       if true, log function filename and line number
      --log.custom-level string          custom level overrides for specific loggers in the format NAME1=ERROR,NAME2=WARN,... Only level increment is supported, and only for selected loggers!
      --log.development                  if true, set logging to development mode
      --log.encoding string              configures log encoding. can either be 'console', 'json', 'pretty', or 'gcloudlogging'.
      --log.level Level                  the minimum log level to log (default info)
      --log.output string                can be stdout, stderr, or a filename (default "stderr")
      --log.stack                        if true, log stack traces
      --metrics.addr string              address(es) to send telemetry to (comma-separated) (default "collectora.storj.io:9000")
      --metrics.app string               application name for telemetry identification. Ignored for certain applications. (default "storagenode.exe")
      --metrics.app-suffix string        application suffix. Ignored for certain applications. (default "-release")
      --metrics.event-addr string        address(es) to send telemetry to (comma-separated IP:port or complex BQ definition, like bigquery:app=...,project=...,dataset=..., depends on the config/usage) (default "eventkitd.datasci.storj.io:9002")
      --metrics.event-queue int          size of the internal eventkit queue for UDP sending (default 10000)
      --metrics.instance-prefix string   instance id prefix
      --metrics.interval duration        how frequently to send up telemetry. Ignored for certain applications. (default 1m0s)
      --monkit.hw.oomlog string          path to log for oom notices (default "/var/log/kern.log")
      --tracing.agent-addr string        address for jaeger agent (default "agent.tracing.datasci.storj.io:5775")
      --tracing.app string               application name for tracing identification (default "storagenode.exe")
      --tracing.app-suffix string        application suffix (default "-release")
      --tracing.buffer-size int          buffer size for collector batch packet size
      --tracing.enabled                  whether tracing collector is enabled (default true)
      --tracing.host-regex string        the possible hostnames that trace-host designated traces can be sent to (default "\\.storj\\.tools:[0-9]+$")
      --tracing.interval duration        how frequently to flush traces to tracing agent (default 0s)
      --tracing.queue-size int           buffer size for collector queue size
      --tracing.sample float             how frequent to sample traces

How will that help? My nodes are working, but they turn themselves off 2-4 times a day without logging why they turned off. When I start them again, they start OK.

The cmd window would still show the error even if it can't be written into the log for whatever reason.

OK, I will try running one of the nodes like this.

After the node went off, the cmd window just closed.

Then please try running it like this:

cmd.exe /c "C:\Program Files\Storj\Storage Node\storagenode.exe" run --config-dir "C:\Program Files\Storj\Storage Node\\"

Or run a second cmd.exe inside the first one and then execute the run command there. That covers the case where the second cmd.exe closes but you still have all the output in the first one.
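
For example (a rough sketch, assuming the default install path from the output above; the log file name in the last line is just an example):

rem in the first window, start a nested interpreter
cmd.exe
rem then run the node from the nested prompt; if the inner cmd.exe closes, the output stays in the first window
"C:\Program Files\Storj\Storage Node\storagenode.exe" run --config-dir "C:\Program Files\Storj\Storage Node\\"
rem optionally capture the console output to a file as well, in case the whole window closes
rem (nothing is shown on screen while redirected, but everything ends up in the file)
"C:\Program Files\Storj\Storage Node\storagenode.exe" run --config-dir "C:\Program Files\Storj\Storage Node\\" > "%TEMP%\storagenode-console.log" 2>&1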

Now I somehow got the error. It looks like the system runs out of RAM: the server has 48 GB, and I haven't seen in Task Manager that any single node consumed more than 150 MB, but I can see that overall about 45 GB is in use when the nodes go off, after which it drops.

2025-09-09T16:37:26+03:00 ERROR failure during run {error: Failed to create storage node peer: hashstore: read H:\hashstore\12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S\s0\meta\hashtbl-0000000000000013: Insufficient system resources exist to complete the requested service.\n\tstorj.io/storj/storagenode/hashstore.(*roPageCache).ReadRecord:668\n\tstorj.io/storj/storagenode/hashstore.(*HashTbl).ComputeEstimates:300\n\tstorj.io/storj/storagenode/hashstore.OpenHashTbl:177\n\tstorj.io/storj/storagenode/hashstore.OpenTable:146\n\tstorj.io/storj/storagenode/hashstore.NewStore:276\n\tstorj.io/storj/storagenode/hashstore.New:93\n\tstorj.io/storj/storagenode/piecestore.(*HashStoreBackend).getDB:248\n\tstorj.io/storj/storagenode/piecestore.NewHashStoreBackend:114\n\tstorj.io/storj/storagenode.New:618\n\tmain.cmdRun:84\n\tmain.newRunCmd.func1:33\n\tstorj.io/common/process.cleanup.func1.2:388\n\tstorj.io/common/process.cleanup.func1:406\n\tgithub.com/spf13/cobra.(*Command).execute:985\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:1117\n\tgithub.com/spf13/cobra.(*Command).Execute:1041\n\tstorj.io/common/process.ExecWithCustomOptions:115\n\tstorj.io/common/process.ExecWithCustomConfigAndLogger:80\n\tstorj.io/common/process.ExecWithCustomConfig:75\n\tstorj.io/common/process.Exec:65\n\tmain.(*service).Execute.func1:107\n\tgolang.org/x/sync/errgroup.(*Group).add.func1:130, errorVerbose: Failed to create storage node peer: hashstore: read H:\hashstore\12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S\s0\meta\hashtbl-0000000000000013: Insufficient system resources exist to complete the requested service.\n\tstorj.io/storj/storagenode/hashstore.(*roPageCache).ReadRecord:668\n\tstorj.io/storj/storagenode/hashstore.(*HashTbl).ComputeEstimates:300\n\tstorj.io/storj/storagenode/hashstore.OpenHashTbl:177\n\tstorj.io/storj/storagenode/hashstore.OpenTable:146\n\tstorj.io/storj/storagenode/hashstore.NewStore:276\n\tstorj.io/storj/storagenode/hashstore.New:93\n\tstorj.io/storj/storagenode/piecestore.(*HashStoreBackend).getDB:248\n\tstorj.io/storj/storagenode/piecestore.NewHashStoreBackend:114\n\tstorj.io/storj/storagenode.New:618\n\tmain.cmdRun:84\n\tmain.newRunCmd.func1:33\n\tstorj.io/common/process.cleanup.func1.2:388\n\tstorj.io/common/process.cleanup.func1:406\n\tgithub.com/spf13/cobra.(*Command).execute:985\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:1117\n\tgithub.com/spf13/cobra.(*Command).Execute:1041\n\tstorj.io/common/process.ExecWithCustomOptions:115\n\tstorj.io/common/process.ExecWithCustomConfigAndLogger:80\n\tstorj.io/common/process.ExecWithCustomConfig:75\n\tstorj.io/common/process.Exec:65\n\tmain.(*service).Execute.func1:107\n\tgolang.org/x/sync/errgroup.(*Group).add.func1:130\n\tmain.cmdRun:86\n\tmain.newRunCmd.func1:33\n\tstorj.io/common/process.cleanup.func1.2:388\n\tstorj.io/common/process.cleanup.func1:406\n\tgithub.com/spf13/cobra.(*Command).execute:985\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:1117\n\tgithub.com/spf13/cobra.(*Command).Execute:1041\n\tstorj.io/common/process.ExecWithCustomOptions:115\n\tstorj.io/common/process.ExecWithCustomConfigAndLogger:80\n\tstorj.io/common/process.ExecWithCustomConfig:75\n\tstorj.io/common/process.Exec:65\n\tmain.(*service).Execute.func1:107\n\tgolang.org/x/sync/errgroup.(*Group).add.func1:130}
2025-09-09T16:37:26+03:00 FATAL Unrecoverable error {error: Failed to create storage node peer: hashstore: read H:\hashstore\12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S\s0\meta\hashtbl-0000000000000013: Insufficient system resources exist to complete the requested service.\n\tstorj.io/storj/storagenode/hashstore.(*roPageCache).ReadRecord:668\n\tstorj.io/storj/storagenode/hashstore.(*HashTbl).ComputeEstimates:300\n\tstorj.io/storj/storagenode/hashstore.OpenHashTbl:177\n\tstorj.io/storj/storagenode/hashstore.OpenTable:146\n\tstorj.io/storj/storagenode/hashstore.NewStore:276\n\tstorj.io/storj/storagenode/hashstore.New:93\n\tstorj.io/storj/storagenode/piecestore.(*HashStoreBackend).getDB:248\n\tstorj.io/storj/storagenode/piecestore.NewHashStoreBackend:114\n\tstorj.io/storj/storagenode.New:618\n\tmain.cmdRun:84\n\tmain.newRunCmd.func1:33\n\tstorj.io/common/process.cleanup.func1.2:388\n\tstorj.io/common/process.cleanup.func1:406\n\tgithub.com/spf13/cobra.(*Command).execute:985\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:1117\n\tgithub.com/spf13/cobra.(*Command).Execute:1041\n\tstorj.io/common/process.ExecWithCustomOptions:115\n\tstorj.io/common/process.ExecWithCustomConfigAndLogger:80\n\tstorj.io/common/process.ExecWithCustomConfig:75\n\tstorj.io/common/process.Exec:65\n\tmain.(*service).Execute.func1:107\n\tgolang.org/x/sync/errgroup.(*Group).add.func1:130, errorVerbose: Failed to create storage node peer: hashstore: read H:\hashstore\12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S\s0\meta\hashtbl-0000000000000013: Insufficient system resources exist to complete the requested service.\n\tstorj.io/storj/storagenode/hashstore.(*roPageCache).ReadRecord:668\n\tstorj.io/storj/storagenode/hashstore.(*HashTbl).ComputeEstimates:300\n\tstorj.io/storj/storagenode/hashstore.OpenHashTbl:177\n\tstorj.io/storj/storagenode/hashstore.OpenTable:146\n\tstorj.io/storj/storagenode/hashstore.NewStore:276\n\tstorj.io/storj/storagenode/hashstore.New:93\n\tstorj.io/storj/storagenode/piecestore.(*HashStoreBackend).getDB:248\n\tstorj.io/storj/storagenode/piecestore.NewHashStoreBackend:114\n\tstorj.io/storj/storagenode.New:618\n\tmain.cmdRun:84\n\tmain.newRunCmd.func1:33\n\tstorj.io/common/process.cleanup.func1.2:388\n\tstorj.io/common/process.cleanup.func1:406\n\tgithub.com/spf13/cobra.(*Command).execute:985\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:1117\n\tgithub.com/spf13/cobra.(*Command).Execute:1041\n\tstorj.io/common/process.ExecWithCustomOptions:115\n\tstorj.io/common/process.ExecWithCustomConfigAndLogger:80\n\tstorj.io/common/process.ExecWithCustomConfig:75\n\tstorj.io/common/process.Exec:65\n\tmain.(*service).Execute.func1:107\n\tgolang.org/x/sync/errgroup.(*Group).add.func1:130\n\tmain.cmdRun:86\n\tmain.newRunCmd.func1:33\n\tstorj.io/common/process.cleanup.func1.2:388\n\tstorj.io/common/process.cleanup.func1:406\n\tgithub.com/spf13/cobra.(*Command).execute:985\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:1117\n\tgithub.com/spf13/cobra.(*Command).Execute:1041\n\tstorj.io/common/process.ExecWithCustomOptions:115\n\tstorj.io/common/process.ExecWithCustomConfigAndLogger:80\n\tstorj.io/common/process.ExecWithCustomConfig:75\n\tstorj.io/common/process.Exec:65\n\tmain.(*service).Execute.func1:107\n\tgolang.org/x/sync/errgroup.(*Group).add.func1:130}
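
That message is the text of the Windows error ERROR_NO_SYSTEM_RESOURCES (1450), so it could be worth logging overall memory and kernel pool usage around the time the nodes go off. A minimal sketch using the built-in typeperf tool (counter names assume an English Windows installation; the output file name is just an example):

rem sample the counters every 60 seconds until stopped with Ctrl+C
typeperf "\Memory\Available MBytes" "\Memory\Pool Nonpaged Bytes" "\Memory\Pool Paged Bytes" -si 60 -o "%TEMP%\memory-counters.csv"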

Did you integrate some miner into the storagenode?

The CPU is an i7-12700K with 128 GB RAM.
It is not the server that is giving problems right now; this one still works OK.
The server runs only Storj, with 17 nodes.

For my Windows Server systems, memory usage with hashstore looks strange as well: almost all physical memory is in use and the pagefile is huge, but the nodes are still running. This is a big difference compared to my Linux systems, where everything looks fine.

Do you use memtbl or the standard hashstore (hashtbl)?
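
For reference, per the help output above the table kind is controlled by --hashstore.table-default-kind (default HashTbl, applied only to new compactions), and as I understand it memtbl keeps the table in RAM, which is why the 48 GB question came up earlier. A hedged example of what setting it on the run command would look like:

"C:\Program Files\Storj\Storage Node\storagenode.exe" run --config-dir "C:\Program Files\Storj\Storage Node\\" --hashstore.table-default-kind memtbl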