Expand storage on Docker

Hi.
I have been testing a Storj node with Docker inside a Debian container with nested virtualization on Proxmox.

Now I want to expand the storage space from 1 TB to 4 TB.
Do I need to destroy the Docker instance and create a new one? If so, how do I preserve the existing storage content?

Thank you.

If you used mount points in your docker run command (as instructed in the HOW_TO), then that data lives outside of the Docker container and therefore doesn’t get destroyed when you remove the container.
You could post your docker run command, just to be sure.

I just did this on an Ubuntu node. Here is what you need to do.

You need to stop the docker container for SNO.
Remove the current container.
Recreate the container with the updated space.

I can go into detail with the Docker commands if you need me to, but removing the container does not touch the data. You just need to make sure that you use the same mount points that you currently use in your docker command.

If you post your docker run command here we can help. Just edit out your email and wallet.

docker run -d --restart unless-stopped -p 28967:28967 \
  -p 14002:14002 \
  -e WALLET="MY_WALLET_ADDRESS" \
  -e EMAIL="my@email.com" \
  -e ADDRESS="xxx.xxx.xxx.xxx:28967" \
  -e BANDWIDTH="40TB" \
  -e STORAGE="800GB" \
  --mount type=bind,source="/root/.local/share/storj/identity/storagenode",destination=/app/identity \
  --mount type=bind,source="/media/1tb/",destination=/app/config \
  --name storagenode storjlabs/storagenode:beta

The mount point is a second .raw disk image, which I will move to the new 4 TB storage using the Proxmox storage management, and afterwards I will expand it directly from Proxmox.
Therefore I will not lose the mount point or the storage.
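For reference, the move-and-grow step can be done with the Proxmox `pct` tool on the host. This is only a hedged sketch: the VMID `101`, the mount point ID `mp0`, and the storage name `big-4tb-storage` are placeholders for your own setup, so check them against `pct config` before running anything.

```shell
# Placeholders: 101 = the container's VMID, mp0 = the mount point
# backing /media/1tb inside the container.
pct shutdown 101

# Move the .raw disk to the new 4 TB storage
# ("big-4tb-storage" is a made-up storage ID; use your real one).
pct move-volume 101 mp0 big-4tb-storage

# Grow the volume by 3 TB; for containers Proxmox also grows the
# filesystem on the mount point.
pct resize 101 mp0 +3T

pct start 101
```

After the container is back up, `df -h /media/1tb` inside it should show the new capacity.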


docker run -d --restart unless-stopped -p 28967:28967 \
  -p 14002:14002 \
  -e WALLET="MY_WALLET_ADDRESS" \
  -e EMAIL="my@email.com" \
  -e ADDRESS="xxx.xxx.xxx.xxx:28967" \
  -e BANDWIDTH="40TB" \
  -e STORAGE="4TB" \
  --mount type=bind,source="/root/.local/share/storj/identity/storagenode",destination=/app/identity \
  --mount type=bind,source="/media/1tb/",destination=/app/config \
  --name storagenode storjlabs/storagenode:beta

That should be all you need to change it to. So, here are the commands you need.

docker stop -t 300 storagenode
docker rm storagenode
Then use the above run command, with your wallet and email address filled in, and you should be good to go.


unless /media/1tb indeed only offers 1 TB of space :smiley:


no no, it’s only the mount point folder name :slight_smile:

You might want to back up the raw disk before expanding it. I’ve done the same in ESXi, but decided to create a new vmdk and just used rsync to replicate the data, as I did not want to take that level of risk.

Good point there. :slight_smile:

What size is your hard drive? If it’s a 4 TB drive, you should allow for 10% free space, and a 4 TB drive isn’t a true 4 TB anyway, more like 3.7 TB, so you need to take that into account as well.
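The marketing-vs-reported gap is easy to check: a drive sold as "4 TB" holds 4 × 10^12 decimal bytes, but most tools report binary tebibytes (2^40 bytes each), which is where the "more like 3.7" figure comes from:

```shell
# 4 TB in decimal bytes, converted to binary tebibytes (TiB).
DRIVE_BYTES=4000000000000
TIB=$(awk -v b="$DRIVE_BYTES" 'BEGIN { printf "%.2f", b / (1024 ^ 4) }')
echo "4 TB drive = ${TIB} TiB"   # prints: 4 TB drive = 3.64 TiB
```

Filesystem overhead then eats a little more on top of that, so set STORAGE with some margin below what `df` reports.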


With Proxmox I have never experienced problems with live storage expansion on containers.

Yes, I know. Thank you.

@deathlessdd Does this continue as you go up in size? So you’d be reserving 1 TB for every 10 TB? I’ve set up a 10 TB (~9 TB formatted) disk for SNO and have reserved 1 TB of space on the disk. That seems like a lot, especially when my next jump would be to 15 or 20 TB.

No, I wouldn’t go that far, but I do think you should give yourself some room, so that the database doesn’t get corrupted if the drive fills up.


Nor have I doing it with ESXi, but I back up my VMs nightly and don’t mind risking anything that is backed up. However, we can’t really back up an SNO, as there are too many changes. But if the risk is acceptable to you, go for it. It’s just not acceptable for me.

Yes, I know this requirement. My test was on 1 TB with 200 GB free.

I try to keep at least 300 to 400 GB free for my nodes, though I’m not using only 1 TB drives, so it may vary. You never know what may happen; I’d rather be safe than chance losing a node over this.


I guess 10% is a safe number for 500 GB drives. I’d love to know what people are reserving for larger SNOs, as I really don’t think we should need to reserve more than ~200 GB. Either way, not really an issue at the moment.

Does anyone run the node behind an IPS?

I expect there are lots of people doing that. Maybe start a new topic if you have questions, but I run pfSense with Suricata. I even have regional blocking set up to block some of the Asian networks from connecting. So far, no issues.
