Second Node on same device

Hi team,

I am trying to set up a second node on my device to have it vetted, and then move it to a second HDD.
However, I get an error on startup:

2023-11-26 21:47:54,879 INFO success: processes-exit-eventlistener entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2023-11-26 21:47:54,879 INFO success: storagenode entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2023-11-26 21:47:54,879 INFO success: storagenode-updater entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2023-11-26T21:47:55Z	INFO	db.migration	Database Version	{"process": "storagenode", "version": 54}
2023-11-26T21:47:57Z	INFO	preflight:localtime	start checking local system clock with trusted satellites' system clock.	{"process": "storagenode"}
2023-11-26T21:47:58Z	INFO	preflight:localtime	local system clock is in sync with trusted satellites' system clock.	{"process": "storagenode"}
2023-11-26T21:47:58Z	INFO	trust	Scheduling next refresh	{"process": "storagenode", "after": "5h34m53.056819275s"}
2023-11-26T21:47:58Z	INFO	bandwidth	Performing bandwidth usage rollups	{"process": "storagenode"}
2023-11-26T21:47:58Z	INFO	Node 1EveoMDJVKpdDYBA3WqgfHtd7dQeCfaYWPYUUMbMbFiCSjE4EL started	{"process": "storagenode"}
2023-11-26T21:47:58Z	INFO	Public server started on [::]:28967	{"process": "storagenode"}
2023-11-26T21:47:58Z	INFO	Private server started on 127.0.0.1:7778	{"process": "storagenode"}
2023-11-26T21:47:58Z	INFO	pieces:trash	emptying trash started	{"process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2023-11-26T21:47:58Z	INFO	failed to sufficiently increase receive buffer size (was: 208 kiB, wanted: 2048 kiB, got: 416 kiB). See https://github.com/quic-go/quic-go/wiki/UDP-Buffer-Sizes for details.	{"process": "storagenode"}
2023-11-26T21:47:58Z	INFO	pieces:trash	emptying trash started	{"process": "storagenode", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2023-11-26T21:47:58Z	INFO	pieces:trash	emptying trash started	{"process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2023-11-26T21:47:58Z	WARN	piecestore:monitor	Disk space is less than requested. Allocated space is	{"process": "storagenode", "bytes": 497614712832}
2023-11-26T21:47:58Z	ERROR	piecestore:monitor	Total disk space is less than required minimum	{"process": "storagenode", "bytes": 500000000000}
2023-11-26T21:47:58Z	ERROR	services	unexpected shutdown of a runner	{"process": "storagenode", "name": "piecestore:monitor", "error": "piecestore monitor: disk space requirement not met", "errorVerbose": "piecestore monitor: disk space requirement not met\n\tstorj.io/storj/storagenode/monitor.(*Service).Run:135\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
2023-11-26T21:47:58Z	INFO	pieces:trash	emptying trash started	{"process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2023-11-26T21:47:58Z	ERROR	collector	error during collecting pieces: 	{"process": "storagenode", "error": "context canceled"}
2023-11-26T21:47:58Z	ERROR	piecestore:cache	error during init space usage db: 	{"process": "storagenode", "error": "piece space used: context canceled", "errorVerbose": "piece space used: context canceled\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceSpaceUsedDB).Init:73\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:81\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
2023-11-26T21:47:58Z	ERROR	nodestats:cache	Get pricing-model/join date failed	{"process": "storagenode", "error": "context canceled"}
2023-11-26T21:47:58Z	ERROR	gracefulexit:blobscleaner	couldn't receive satellite's GE status	{"process": "storagenode", "error": "context canceled"}
2023-11-26T21:47:58Z	ERROR	gracefulexit:chore	error retrieving satellites.	{"process": "storagenode", "error": "satellitesdb: context canceled", "errorVerbose": "satellitesdb: context canceled\n\tstorj.io/storj/storagenode/storagenodedb.(*satellitesDB).ListGracefulExits.func1:195\n\tstorj.io/storj/storagenode/storagenodedb.(*satellitesDB).ListGracefulExits:207\n\tstorj.io/storj/storagenode/gracefulexit.(*Service).ListPendingExits:59\n\tstorj.io/storj/storagenode/gracefulexit.(*Chore).AddMissing:58\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/storj/storagenode/gracefulexit.(*Chore).Run:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}

Your disk does not have enough free space. Is it by chance a 500 GB disk / partition?
And hopefully it is not on the same drive as the first node; that would violate the ToS.
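
The `ERROR piecestore:monitor` line in the log shows the hard minimum (500000000000 bytes, i.e. 500 GB). A minimal sketch to check this before starting the node, assuming a Linux host with GNU coreutils (`STORJ_MOUNT` is a placeholder variable, not anything the node itself reads):

```shell
# Minimal sketch: does the mount have the 500 GB minimum free that
# storagenode requires? STORJ_MOUNT is an assumption -- point it at
# the disk you allocated; it defaults to / only for illustration.
MOUNT="${STORJ_MOUNT:-/}"
MIN_BYTES=500000000000
AVAIL_BYTES=$(df -B1 --output=avail "$MOUNT" | tail -n 1 | tr -d ' ')
if [ "$AVAIL_BYTES" -lt "$MIN_BYTES" ]; then
    echo "too small: $AVAIL_BYTES bytes free on $MOUNT"
else
    echo "ok: $AVAIL_BYTES bytes free on $MOUNT"
fi
```

Note the allocated space must also stay below what is actually free, otherwise you get the `WARN piecestore:monitor Disk space is less than requested` seen above.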

Why not use the new drive directly?

And welcome to the Forum!

THANK YOU!!! Such a quick fix :smiley:
Thanks for pointing out the ToS, I did not know that and will change it. I will need to free up the second drive - I am basically preparing for when the first disk is full.

And also thanks for the welcome :slight_smile:

No problem, fellow Storage Node Operator.

Also see Announcement: Changes to node payout rates as of December 1st, 2023 (Closed) - #3 by Bryanm

A 500 GB disk does not earn back the money for the power it uses.

Yeah, I saw that. I am actually migrating as much as possible from Chia, so ideally I have 200+ TB to bring to Storj. I will delete Chia plots as I get allocations.

It's important to use a newly generated and signed identity, or you will kill both nodes!
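
For reference, the documented flow with the `identity` CLI looks roughly like this; the email and token are placeholders, and this assumes the `identity` binary from the Storj setup docs is installed:

```shell
# Sketch: generate and sign a FRESH identity for the second node.
# Never copy the first node's identity folder.
identity create storagenode
# The authorization token comes from your Storj hosting invite/account;
# the value below is a placeholder.
identity authorize storagenode your-email@example.com:<authorization-token>
```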

I am aware of that, thanks! I operated a Storj node two years ago, so I know some of the basics.

Appreciate all your help!!

To fill that up would take a long, long time…
Wish you luck!
Take a look at the recommended file systems!

Ah, yes, it's late here…

Oh I did not see anything about that. I have some NTFS and ext4 disks. Anything wrong with that?

No. ext4 is preferable under Linux, NTFS under Windows.

Note that the NTFS ones may need defragmentation sooner at some point (UltraDefrag with an additional MFT defrag gives the best results).
This can be done while the node is operating, once the disk is full or set to full.

Also, hosting the databases on an SSD reduces IOPS on the storage disk.
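
A hedged sketch of how moving the databases to an SSD is typically done, via the `storage2.database-dir` option; all paths here are assumptions for illustration, and the node must be stopped first:

```shell
# Sketch, assuming config.yaml lives in /mnt/storj and an SSD is
# mounted at /mnt/ssd -- adjust both paths to your setup.
# 1. Stop the node first.
mkdir -p /mnt/ssd/storj-dbs
cp /mnt/storj/storage/*.db /mnt/ssd/storj-dbs/
# 2. Then point the node at the new location in config.yaml:
#      storage2.database-dir: /mnt/ssd/storj-dbs
# 3. Start the node again and watch the log for database errors.
```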

Also, there is a requirement of 1 CPU core and a minimum of 1 disk per node.
More than one node per disk is nonsense and against the ToS. (The ToS are currently being reworked.)

Also, incoming data is split across all nodes on the same IP.

With more node data, a caching solution may be handy, like PrimoCache for Windows or a Linux equivalent.

Watch your cluster sizes! 4K (or 8K on drives >16 TB) is OK; Storj stores a lot of small files.
Don't fill the disk to the brim; leave 10% free.
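
A minimal sketch for checking the cluster (block) size of an existing Linux filesystem, assuming GNU coreutils `stat`:

```shell
# stat -f reports filesystem info; %S is the fundamental block size
# in bytes (typically 4096 on ext4).
BLOCK_SIZE=$(stat -f -c %S /)
echo "block size: $BLOCK_SIZE bytes"
# When formatting a NEW ext4 volume, mkfs.ext4 -b 4096 /dev/sdX1 would
# set a 4 KiB block size explicitly (the device name is a placeholder).
```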

NTFS does not work well under Linux. If you want to use those disks under Linux, it's better to reformat them to ext4.
