My disk is now full

Change your allocated space in your Storj config file and restart the service.

which path contains the config file?

the install directory. Use Notepad++ to edit it, but increase it by only +0.3TB, or you risk problems…
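For reference, the allocation is set in config.yaml in the installation directory (on Windows typically C:\Program Files\Storj\Storage Node\). The value below is just an example; adjust it to your own setup:

```yaml
# config.yaml — total disk space the node may use, in SI (decimal) units
storage.allocated-disk-space: 12.00 TB
```

After saving, restart the service, e.g. from an elevated PowerShell with Restart-Service storagenode (assuming the default service name).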

is 11.95 TB already allocated?

Something does not add up here. Maybe the filewalker can’t finish…

It’s weirdly harder to find the config file on Windows than on Linux.


This makes no sense at all…is this the same node?

yes, it is the same node. I don’t really remember why I chose only to use 12TB and not 13TB!

Because of 10% reserve?

is it here where i can change allocated space?

Should I make the change, or is it better to leave it as it is?

It’s recommended to leave 10% unallocated for overhead…

Well, 10% is actually more necessary on smaller nodes than on bigger ones.
I keep 7% unallocated, with a maximum of 500GB (for nodes >7TB). But I’ve even got nodes like yours, where the filewalker apparently sucks. On those, as little as 500GB has been left free without any noticeable negative impact on the node itself. Although I have to start them with storage2.monitor.minimum-disk-space=100MB as a setting, since they won’t start otherwise.
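The setting mentioned above also goes into config.yaml. A sketch (the 100 MB value is what this poster uses, not a general recommendation):

```yaml
# config.yaml — minimum free space required for the node to start;
# lowering it lets an almost-full node start anyway
storage2.monitor.minimum-disk-space: 100 MB
```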

Storj displays the space in TB (same as drive manufacturers), while Windows displays it in TiB, even though it labels the unit as TB.
12.7 TiB is roughly 14TB, so I would say you can go ahead and set the node to 13TB for now, then adjust the setting again once the node is full but the drive isn’t.
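The unit difference is easy to check yourself. A quick sketch of the TiB-to-TB conversion (plain arithmetic, nothing Storj-specific):

```python
# Windows reports drive sizes in binary units (1 TiB = 2**40 bytes),
# while Storj uses SI/decimal units (1 TB = 10**12 bytes).
TIB = 2**40
TB = 10**12

drive_tib = 12.7                  # capacity as shown by Windows
drive_tb = drive_tib * TIB / TB   # the same capacity in SI terabytes

print(f"{drive_tib} TiB = {drive_tb:.2f} TB")  # 12.7 TiB is about 13.96 TB
```

That is why a "12.7 TB" drive in Windows can comfortably hold a 13 TB allocation in Storj's units.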

You can also get rid of the data from decommed satellites as in some cases not all of it was deleted as mentioned here: How To Forget Untrusted Satellites - Node Operators / FAQ - Storj Community Forum (official)


Everything is correct. Windows shows space in binary units, our software uses SI (decimal) units.
But the difference between used space and average disk usage is not ok, likely the filewalker and/or the Garbage Collector didn’t finish their job.

@moudar Please search for FATAL errors in your logs, also errors related to the filewalker (search for ERROR or failed and walk) and the Garbage Collector (search for ERROR or failed and retain).
For Windows and PowerShell:

sls "fatal" "C:\Program Files\Storj\Storage Node\storagenode.log" | select -last 10
sls "walk" "C:\Program Files\Storj\Storage Node\storagenode.log" | sls "error|failed" | select -last 10
sls "retain" "C:\Program Files\Storj\Storage Node\storagenode.log" | sls "error|failed" | select -last 10

It would be much simpler if the node code allowed more than one mount point/directory to be used as data storage. Expanding a node would then become easy and enable much better utilization of disks.

Adding to the above, what would be brilliant is if, when one mount point (disk) failed, the node just kept running on the remaining mount points and removed the failed disk’s data from the node. Perhaps a future feature.

It’s a very bad idea to re-implement a bad RAID0 solution, where a single disk failure loses the whole node.
You could do this at the OS level, but the result would be the same.
So it’s much better to run two nodes instead, each on its own disk.

I think they mean code that treats multiple mount points as multiple nodes.

So if a second data location is given, the node would generate a corresponding “sub-node ID” with its own reputation. The ability to move data, including the identity, to a new mount point by editing the path, or by running a command to do so, would make multi-node machines much more manageable.

Or the Win GUI Storj Node Toolbox from Vadim (I think) could be developed further to be more flexible.
(I can’t use it because of the non-standard installation path of my first node.)

If the second disk has its own NodeID, that is exactly a second node.
And you can already do that: generate a second identity, sign it with a new authorization token, and run it with the second disk.
Then you can add both nodes to the multinode dashboard.

Yes, I agree. Edit: it was not clear enough.
But generally, an easier way to expand single nodes on Windows PCs into multiple nodes may become more interesting in the distant future, when more expansion is needed again.

I strongly disagree with any simplification of running multiple nodes.
We need durable nodes, not a high node churn rate. Simplification always attracts users who want wizards: “next-next-done. Oops, something doesn’t work - destroy. Next-next-done… doesn’t work again and I don’t care why, I have a bunch of disks and want to put them to work after I failed at HDD mining. Hey support - why doesn’t your software work?!”
See, for example: I would like to enquire: When will I receive my commission?


My story:

  1. First node - no problem.
  2. Second node on a second PC - no problem.

The hurdle I face now: expanding one PC to two nodes, or moving nodes onto one machine? I have no idea.

So I’m not for lowering the entry difficulty; I’m for lowering the difficulty of expanding existing nodes.

I also don’t want that. I edited my post to make it clear.

Shoo rabbit, shooo :slight_smile: