New node. Worked a few hours and now "no such file or directory"

Not clear what happened.
I’m using Ubuntu with Docker.
I ran a docker logs command, and it gives me the error below.

I decided to try deleting everything in my datadir and starting over.
I even tried uninstalling Docker entirely.

My setup command works fine.
Starting the node re-creates all the files.

And then Docker gets back into a restart loop with the error below.
Very confused. It was working for about two hours before I started hitting my head against this wall.

ERROR failure during run {“Process”: “storagenode”, “error”: “Error opening database on storagenode: database: used_serial opening file "config/storage/used_serial.db" failed: unable to open database file: no such file or directory\n\tstorj.io/storj/storagenode/storagenodedb.(*DB).openDatabase:364\n\tstorj.io/storj/storagenode/storagenodedb.(*DB).openExistingDatabase:341\n\tstorj.io/storj/storagenode/storagenodedb.(*DB).openDatabases:316\n\tstorj.io/storj/storagenode/storagenodedb.OpenExisting:281\n\tmain.cmdRun:65\n\tmain.newRunCmd.func1:33\n\tstorj.io/common/process.cleanup.func1.4:393\n\tstorj.io/common/process.cleanup.func1:411\n\tgithub.com/spf13/cobra.(*Command).execute:983\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:1115\n\tgithub.com/spf13/cobra.(*Command).Execute:1039\n\tstorj.io/common/process.ExecWithCustomOptions:112\n\tmain.main:34\n\truntime.main:267”, “errorVerbose”: “Error opening database on storagenode: database: used_serial opening file "config/storage/used_serial.db" failed: unable to open database file: no such file or directory\n\tstorj.io/storj/storagenode/storagenodedb.(*DB).openDatabase:364\n\tstorj.io/storj/storagenode/storagenodedb.(*DB).openExistingDatabase:341\n\tstorj.io/storj/storagenode/storagenodedb.(*DB).openDatabases:316\n\tstorj.io/storj/storagenode/storagenodedb.OpenExisting:281\n\tmain.cmdRun:65\n\tmain.newRunCmd.func1:33\n\tstorj.io/common/process.cleanup.func1.4:393\n\tstorj.io/common/process.cleanup.func1:411\n\tgithub.com/spf13/cobra.(*Command).execute:983\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:1115\n\tgithub.com/spf13/cobra.(*Command).Execute:1039\n\tstorj.io/common/process.ExecWithCustomOptions:112\n\tmain.main:34\n\truntime.main:267\n\tmain.cmdRun:67\n\tmain.newRunCmd.func1:33\n\tstorj.io/common/process.cleanup.func1.4:393\n\tstorj.io/common/process.cleanup.func1:411\n\tgithub.com/spf13/cobra.(*Command).execute:983\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:1115\n\tgithub.com/spf13/cobra.(*Command).Execute:1039\n\tstorj.io/common/process.ExecWithCustomOptions:112\n\tmain.main:34\n\truntime.main:267”}
Error: Error opening database on storagenode: database: used_serial opening file "config/storage/used_serial.db" failed: unable to open database file: no such file or directory

How is your storage connected? What is your docker run command?

Just to be perfectly clear: when you say “start over” you deleted everything, created a new identity, signed it, ran the --SETUP docker command and recreated a whole new node?

Update:

I went into the storage dir yesterday to better understand the file structure being generated. I suspect I broke permissions on that dir in the process.

That drive won’t let me modify files.
I tried a few chmod commands, but they didn’t help.

Decided to cut my losses on that and start using a different hard drive. I am now back up and running.

I just won’t poke the data again.

(I was looking at the files because I wanted to know if I could use mergerfs to span my node across multiple drives… which it looks like I can.)

This way madness lies… :wink:


Haha. Are you saying I shouldn’t do mergerfs?
It seems cleaner than running multiple nodes.

There are loads of discussions on the forum about the merits of RAID, spanning nodes to multiple disks, etc.
You’ll lose your whole node if any of the disks in the mergerfs pool dies, whereas running two or more nodes on the same machine is actually fairly trivial and reduces the load on any one individual node (assuming all your nodes are behind the same IP).
I believe the “official” position is one-node-per-disk.
Also, I am a simple person and I need simple setups for my simple brain to cope :wink:


Good to know. Is there a fairly clear process document somewhere for how to add a second node to an existing machine?

I have an EPYC 128-thread server with over 100 drives.
So I really don’t want to make a mess and hurt my simple brain.

You’re running your nodes on Docker, yes?
Essentially you just need a new mount point for the new node, to create and authenticate a new identity, and (this is important) to start the new node with different ports.

My simple mind made me choose ports at my home ranging sequentially from 28901 to 28910 (for Storj) and 14001 to 14010 (for web interface) for nodes 1 to 10 respectively.
You need to add the port forwards on your router as well, which is trivial as they all map to the same IP address.

Starting the nodes with different ports is also trivial: in the docker run command you just need to change the -p values to something like -p 28961:28967/tcp -p 28961:28967/udp, and so on.
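As a rough sketch of what that looks like for a second node (the ports, mount paths and node name here are just illustrative placeholders, and the image and mount destinations are the standard ones from the docs, so adjust to your own setup):

docker run -d --restart unless-stopped --stop-timeout 300 \
-p 28902:28967/tcp \
-p 28902:28967/udp \
-p 14002:14002 \
-e WALLET="0x..." \
-e EMAIL="you@example.com" \
-e ADDRESS="your.ddns.net:28902" \
-e STORAGE="2TB" \
--mount type=bind,source=/mnt/node2/identity,destination=/app/identity \
--mount type=bind,source=/mnt/node2/storage,destination=/app/config \
--name node2 storjlabs/storagenode:latest

Only the host side of each -p, the ADDRESS port, the mount sources and the --name change per node; everything on the container side stays the same.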

Did any of that make sense?


Yeah, I think so.
Basically follow the same exact process from scratch, except skip the Docker and Storj install steps.
And use unique ports.

Are you storing the identities in your storage dir? I saw notes about that in the install procedure. Sounds like that would be key for making this easier.

Yes, so I have a mount point called “nodeX” for each node, and I plonk the identity in a /nodeX/identity directory. Anything to do with a particular node goes in a subdirectory of the respective node’s mount point. Keeps it easy.
I then name the docker instance nodeX as well, so everything has the same name (I am easily confused). :slight_smile:
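To make that concrete, the layout ends up looking roughly like this (the /mnt paths are just an example of where the mount points might live; use whatever your system uses):

/mnt/node1/identity   (identity files for node1, bind-mounted into the container)
/mnt/node1/storage    (data directory for node1, bind-mounted into the container)
/mnt/node2/identity
/mnt/node2/storage
…and so on, with the docker container for each named node1, node2, etc.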

(Incidentally, I keep the sequential numbering of the node names even across different machines in the same physical location. Just in order to, you guessed it, keep it simple) :smiley:

EDIT: It occurred to me that there may be some much more elegant way of doing this, but it seems to work for me so I just wanted to share. Feel free to ask me for some pointers if you’re struggling :smiley:


I think I’m doing my ports wrong.

Appreciate your eyes on this. (wallet/email/dns modified for privacy)

docker run -d --restart unless-stopped --stop-timeout 300 \
-p 28923:28923/tcp \
-p 28923:28923/udp \
-p 14023:14023 \
-e WALLET="0x###" \
-e EMAIL="fake@email.com" \
-e ADDRESS="fake.ddns.net:28923" \
-e STORAGE="2TB" \

OK, I think it should be

-p 28923:28967/tcp
-p 28923:28967/udp
-p 14023:14002
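
The left-hand number in each -p is the port on your host, i.e. the one you choose per node and forward on your router; the right-hand number is the port inside the container, which is always 28967 for the node and 14002 for the dashboard, so that side never changes. The angle-bracket bits below are just placeholders for whatever host ports you pick:

-p <host node port>:28967/tcp
-p <host node port>:28967/udp
-p <host dashboard port>:14002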

Can you give that a try and see if it works? :slight_smile:


What about this part:
-e ADDRESS="fake.ddns.net:28923"

That’s OK.
Essentially what that line does is tell the satellites to connect to your server on that public port, which is what you want :slight_smile:
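
To sketch the whole path with your fake.ddns.net placeholder, traffic ends up flowing roughly like this:

satellite / customer -> fake.ddns.net:28923 (public) -> router forwards 28923 -> host port 28923 -> container port 28967 (storagenode)

ADDRESS advertises the public side, and the -p mapping handles the hop from the host port to the fixed port inside the container.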

OK, so now I have:
docker run -d --restart unless-stopped --stop-timeout 300 \
-p 28923:28967/tcp \
-p 28923:28967/udp \
-p 14023:14002 \
-e WALLET="0x" \
-e EMAIL="fake" \
-e ADDRESS="fake.ddns.net:28923" \
-e STORAGE="2TB" \

Dashboard loads. Says I’m online.
But there’s a new issue I’ve hit (which wasn’t an issue before changing to custom ports):
My dashboard says QUIC Misconfigured.
“QUIC is misconfigured. You must forward port 28967 for both TCP and UDP to enable QUIC.”

I used to have my router forwarding 28967.
But I changed this to forward the range of 28900 to 28999.

Do I only need to forward the one port (28967) and none of these new ports?

Are you forwarding both TCP and UDP connections on your router?

Sometimes the QUIC health check takes a couple of minutes, I’ve noticed…
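
If it still shows as misconfigured after a while, the usual low-effort check is to restart the container and glance at the tail of the log for errors (this assumes the container kept the default storagenode name; swap in your own if you named it differently):

docker restart storagenode
docker logs --tail 50 storagenode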

Yes, both TCP and UDP.
So I’m right that I need to forward all of these?

Well, you only need to forward the ports for the nodes you’ve got on your internal network. It won’t cause any harm to forward all of those, though.

Yeah, OK.
Just planning ahead for Storj nodes 1-99. lol
I stopped the docker container and started it again, and it seems to be OK now. (Yes, I did that the previous time too. Smile and nod. Not going to try to break my head over it.)

Thanks so much.

Lastly, I notice my docker instance is called “Storagenode”.
Is that something I can name? So when I have a billion instances going, I’ll know which is which.
In my docker start command I don’t see a name defined… so I’m a little confused about that.