FreeBSD: "storagenode run" creates a new folder "storage" for data

Hello everyone,

I managed to edit the config.yaml file and the dashboard showed ONLINE.
But my 2 TB dataset migrated from the previous system isn’t recognized.
Instead of using “storagenode1” and the files and blobs within it, the command “storagenode run” creates a new folder “storage” inside “storagenode1”, which is quite troublesome.

ls -l shows, in “storagenode1”: root 99 (or 99 99),
and in “storage”: 501 99 (or 99 99).
Maybe the problem is there.
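The uid mismatch above (501 vs. root) is typical for data copied from another system, where the numeric owner doesn’t exist locally. A minimal sketch of checking numeric ownership; the chown target user/group in the comment is an assumption, not something from this thread:

```shell
# Demo with a temp dir; numeric uids/gids show up the same way as the 501/99 above.
D=$(mktemp -d)
touch "$D/blob"
ls -ln "$D"    # -n prints numeric uid/gid for each entry instead of names
# Hypothetical fix, aligning ownership with the user actually running the node:
# chown -R root:wheel /zroot/storj/storagenode1
```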

It would be nice to have some notes for FreeBSD; I know they’re coming, and I’m waiting for them.

What does your docker start command look like?

There’s no Docker on FreeBSD. I used the binary from

Please, show your storagenode run command

Hm, I just do "./storagenode run" as the help suggests.
I can show you the full config.yaml, if it’s what you want.

It uses the default config location if you didn’t specify --config-dir.
So it should read config.yaml from ~/.local/share/storj/storagenode.
Please show the value of the storage.path: option:

grep "storage.path:" ~/.local/share/storj/storagenode/config.yaml

storage.path: /zroot/storj/storagenode1/ (I tried with “/” and without)
My config dir is the default one at /root/.local/share/storj/storagenode/config.yaml.
And the node goes online with identity, external address, etc.

My only problem is this: /zroot/storagenode1/storage. It’s created when the node starts.

Weird. I copied my config to the default location and ran storagenode.
It didn’t try to create a “storage” folder inside my storage folder.
My parameter is /mnt/storj/storagenode4/storage though.

If I change it to /mnt/storj/storagenode3 and run the storagenode, it creates blobs and databases in /mnt/storj/storagenode3.

Are you sure that you do not use any scripts?

My version is

$ storagenode version
Release build
Version: v1.1.1
Build timestamp: 01 Apr 20 15:20 UTC
Git commit: 17923e6fd199e2b33a6ef5853a76f9be68322e79

Release build
Version: v1.1.1
Build timestamp: 01 Apr 20 17:20 CEST
Git commit: 17923e6fd199e2b33a6ef5853a76f9be68322e79

Indeed, it’s weird.
Did you try to use an already existing database, like I am?

Maybe I can rename “storagenode1” to “storage” and that’s all?
I will try tomorrow.

I have tried with an already existing database.

I think it creates /zroot/storagenode1/storage again.

Do you run the storagenode as root?

Please show the first line after the run. I would like to know where it took the configuration from.

ls -l /zroot/storj/storagenode1
ls -l /zroot/
ls -l /zroot/storagenode1

I think you missed /storj/ between /zroot/ and /storagenode1/ somewhere in your /root/.local/share/storj/storagenode/config.yaml

Please give me the output of these commands:

grep "/storagenode1" /root/.local/share/storj/storagenode/config.yaml


grep "/storagenode1" ~/.local/share/storj/storagenode/config.yaml


grep "/storagenode1" /home/99/.local/share/storj/storagenode/config.yaml

Thank you for your time Alexey.

Node operational on FreeBSD 12.1 with a ZFS stripe.

/zroot/storj/storagenode1/storage was the correct data path. My bad.

You can delete the topic.

Thanks again.

A ZFS stripe is a bad idea; it’s just like RAID0. If one drive fails, you lose the data on all of the drives. Better to run one node on each drive.

I migrated one full 2 TB disk to a stripe of 6 old HDDs of different sizes and sector sizes, plus a small SSD for the log. That’s why I striped.

That makes it even worse: 6 striped drives. The probability of your node dying with 6 old drives is so high that I’d be surprised if it survives more than a few months. A single hard-drive failure and everything is lost.
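As a rough back-of-the-envelope check (the 10% per-disk annual failure rate is an assumed number, not a measurement), a 6-disk stripe dies if any single disk dies:

```shell
# Stripe failure odds = 1 - (1 - p)^6, with an assumed p = 0.10 per disk per year
awk 'BEGIN { p = 0.10; printf "stripe failure odds: %.0f%%\n", (1 - (1 - p)^6) * 100 }'
# prints roughly 47%
```

So even with a modest per-disk failure rate, the stripe as a whole is close to a coin flip over a year.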


In 24h Storj uploaded almost 500 GB of data to me, with 160 GB of egress, so right now I think it was a good move. I have two choices: replace a failing drive with a similar one at the last moment, or buy a large drive and do rsync loops onto it.

A full node is useless.

And even if the node fails, the customer data will not be lost, since it’s Storj. And I won’t cry, because it’s for fun.