First, I want to add another folder on a different drive. I understand I have to mount my external drive (on Linux), but how do I change the STORAGE parameter to include multiple folder paths?
Second, I will have to restart the node with different parameters to include another folder. But how do I do this? When I type docker start, I don't see any options for changing or defining parameters.
Third, if I can do the above two, how do I reduce the amount of space available without a penalty? I know that if I delete the files, I will not get paid as much because I need to pay a fee. I was assuming that if I can restart a node with different parameters, the node will upload its files to other nodes and I will not need as much disk space, without having to pay a fee. If this is not currently a feature, I would like to recommend that it become one.
The reason why is that you repeatedly recommend not purchasing new equipment for Storj. I was planning on using my old laptop and external drives for Storj, but these are also in use for file backup/storage. Burstcoin was exactly what I wanted (I could plot multiple plots and slowly delete them as I used up the storage), but its difficulty went up. If you recommend not buying more hardware, then not having the ability to shrink a node's disk space goes directly against that recommendation, since deleting and restarting a new node means I can only ever allocate more space, and I will never get my held revenue returned.
Overall, I have around 3.4 GB already used up in around 24 hours (much faster than v2) and am very happy with my experience, except that Windows 10 Home is not supported. Keep up the great work; I am looking forward to more features and the beta to come.
You can't include more than one folder. The only solution right now would be creating a second storage node for the external drive or using something like JBOD. I would recommend a second storage node, because in case of a hard drive failure only one storage node gets disqualified.
You can reduce the allocated space. Your node will not accept new data and will slowly delete old data. There is no penalty for it.
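For reference, docker start cannot change parameters: the usual procedure is to stop and remove the container (the node's data on the host survives this) and re-run it with a new STORAGE value. A minimal sketch, assuming hypothetical paths, wallet, address, and ports; replace every placeholder with your own values, and the image tag may differ for your release:

```shell
# Stop gracefully, remove the container (host data survives),
# then recreate it with a smaller STORAGE allocation.
docker stop -t 300 storagenode
docker rm storagenode
docker run -d --restart unless-stopped --name storagenode \
    -p 28967:28967 \
    -e WALLET="0xYOUR_WALLET_ADDRESS" \
    -e EMAIL="you@example.com" \
    -e ADDRESS="your.host.example.com:28967" \
    -e STORAGE="500GB" \
    --mount type=bind,source=/mnt/storj/identity,destination=/app/identity \
    --mount type=bind,source=/mnt/storj/storage,destination=/app/config \
    storjlabs/storagenode:latest
```

The same stop/rm/run cycle applies any time a parameter needs to change.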
Never ever delete files on your drive. If you delete a single piece you risk getting disqualified for it. Our system is able to detect that and the penalty is high. If your node is getting disqualified you lose everything.
Thanks for the response. However, you never answered how to restart a node with new parameters. Do I use the start node command but replace start with restart?
Second, to my knowledge, I can't start two nodes with one authentication key (according to the node setup instructions on GitHub), so how do I start two nodes with one key?
I can’t answer your first question because I am not a docker expert. I will leave that question open for someone else.
Your second question: you can use the auth token only once, so you will need a second auth token. In the past we blocked these requests because the satellite was not ready for two nodes on the same IP. That was fixed with the last release, so I would expect us to send out as many auth tokens as you want. This process might take a bit more time because we haven't updated the invite process. For now, please sign up on the waitlist and wait for a new auth token.
Don't spin up two storage nodes with the same identity. We have seen one storage node operator do this. The satellite sends all requests to the storage node it last contacted. Upload a file to the first storage node, upload a file to the second storage node, and then what happens when the audit request for the first upload goes to the wrong storage node? Both storage nodes will get disqualified quickly.
You can definitely use JBOD and have the storagenode folder on that volume. I’m not entirely sure what instructions you are looking for. Setting up JBOD is kind of out of scope of the node setup.
You can also use LVM on linux or storage spaces on windows. I suggest just googling for instructions on any of those if you want to try it.
No. You will not lose anything, provided you didn't make a mistake in your mapping and/or mount points.
For example, suppose you use Windows and the -v option for your mapping. Docker for Windows may fail to map your disk into the container, and with -v it can silently create an empty overlay disk inside the container instead of your disk. In that case, when you remove the container, it is removed along with the customers' data. To avoid this, you must update your docker run command to use the --mount option instead of -v. Please consult the instructions for how it should look.
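The difference between the two options can be sketched as follows (the paths are placeholders, and the ellipses stand for the rest of your usual docker run options): with -v, Docker may quietly create an empty volume when the bind source is unavailable, while --mount with type=bind fails loudly instead.

```shell
# Risky on Docker for Windows: if the host path cannot be mapped,
# an empty volume may be created inside the container instead.
docker run ... -v /c/Users/me/storj/storage:/app/config ...

# Safer: --mount with type=bind errors out if the source path is missing.
docker run ... --mount type=bind,source=/c/Users/me/storj/storage,destination=/app/config ...
```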
Currently I allocate 840 GB on my one and only internal drive. I also have 700 GB free on an EXTERNAL drive. My question is: can I configure a folder on Ubuntu that spans partitions on both drives, so I can dedicate 1540 GB to the network and write 1540 GB to that folder?
I’m not aware of anything that does this on a folder level. You can use LVM to do something like that, but I don’t recommend combining external and internal drives. With your setup I would just start with the internal drive and perhaps spin up another node for the external drive when that is eventually allowed and your first node has filled up.
I would not recommend using an external hard drive at all.
They go offline very often because of insufficient power or an overheating USB controller, and they are also often very slow.
Thank you for the link. I will take a look and decide whether to take the risk. I am planning on building a multi-drive system (6 drives) for personal use, but you mention just waiting for the option to add more than one node. Do you know when this will happen? Applying for 6 authentication keys with 6 emails takes forever to set up, so do you know when being able to set up more than one node will come out as a feature?
You can subscribe to the waitlist with different email addresses to receive multiple authorization tokens. Then you generate the same number of identities and sign them with the authorization tokens from the invites.
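For each token, the identity steps from the node setup instructions look roughly like this; the node name, email, and token string below are placeholders for your own values:

```shell
# Generate a fresh identity for the second node, then sign it
# with the authorization token from the invite email.
identity create storagenode2
identity authorize storagenode2 you@example.com:1ExampleTokenString
```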
I should warn you about the downsides of running multiple nodes behind one public IP address:
Your nodes will be treated as one node for uploads and downloads. So you will not receive more data than a single node would.
For audits they will be treated as separate nodes, so your connection and hardware could be saturated with multiple audit requests, and each of them can fail more audits than a single node would.
The vetting period for each node grows in proportion to the number of nodes: an unvetted node gets less data than a vetted one (5% at the moment), and since they are treated as one node, this small amount of data is spread across all of them. To finish the vetting process, each node must pass 500 audits on each satellite. More nodes means a longer process (up to infinity).
A 6-drive setup would change my advice. At least if you plan to use internal drives. In that case I would recommend putting the drives in a RAID5 array and setting up one node on it. You are protected against a drive failure that way and don't have the downsides @Alexey outlined.
That is, if you are planning to use 6 drives you already own. If you buy new hardware, I suggest going with a single large drive and possibly adding a second node on a second hard drive when it fills up.
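On Linux, such a RAID5 array could be sketched with mdadm like this; the device names and mount point are assumptions, so adapt them to your system before running anything:

```shell
# Create a 6-drive RAID5 array, format it, and mount it for the node.
mdadm --create /dev/md0 --level=5 --raid-devices=6 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg
mkfs.ext4 /dev/md0
mkdir -p /mnt/storj
mount /dev/md0 /mnt/storj
```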
I started with two 160GB SATA disks. I currently have 12 of them (3TB total for Storj operations) and a few more on my shelf, waiting for another controller.
My setup is easy: every hard drive has a duplicate in a RAID mirror (created as an MD device).
All MDs are used as PVs for LVM.
In LVM I have one VG for all LVs.
I manipulate the LVs on the fly as needed (of course, resizing must be done while unmounted, but that only costs a few minutes). All my operations are scripted, keeping downtime as low as possible.
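The layout described above can be sketched as follows; the device and volume names are examples, not the poster's actual setup:

```shell
# Mirror a pair of drives into an MD device, then layer LVM on top.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
pvcreate /dev/md0                  # the MD becomes a physical volume
vgcreate storj_vg /dev/md0         # one volume group for all LVs
lvcreate -n storj_lv -l 100%FREE storj_vg
mkfs.ext4 /dev/storj_vg/storj_lv
```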
As @Alexey mentioned, external hard drives are too risky to use (easy to disconnect, no redundancy, …)
If one hard drive fails, all data is still on the second drive in the MD pair. The failed drive can easily be replaced by one of similar or bigger capacity and the array rebuilt on the fly.
If I don't have a free spare hard drive, I attach two bigger drives, create another MD, attach it to LVM as a PV, and have LVM move all data from the degraded MD/PV to the new, good MD/PV. After that I just remove the degraded PV from LVM without losing any data.
A good hard drive is usable as a spare for another MD. A bad one is good for recycling.
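That migration off a degraded pair can be sketched with LVM's pvmove; the MD and volume group names are placeholders:

```shell
# Add a new healthy MD pair as a PV, drain the degraded one, remove it.
pvcreate /dev/md2
vgextend storj_vg /dev/md2
pvmove /dev/md1            # moves all extents off the degraded PV online
vgreduce storj_vg /dev/md1
```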
If I need to allocate more space to the StorJShare folder (which exists as a mount point), I just stop the storagenode, remove it, unmount the folder, and let LVM allocate all free space to this LV. After that, I mount it back and start the storagenode with new parameters. It costs 5-10 minutes of downtime, because thanks to LVM I can do many operations beforehand and on the fly.
Growing is similar: just add a new MD and let LVM raise the capacity.
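Growing could look like this as a sketch; the device and volume names are placeholders, and ext4 can even be grown while mounted:

```shell
# Add a new mirrored pair and extend the LV and filesystem over it.
mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sdh /dev/sdi
pvcreate /dev/md3
vgextend storj_vg /dev/md3
lvextend -l +100%FREE /dev/storj_vg/storj_lv
resize2fs /dev/storj_vg/storj_lv
```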
Don't worry, I have only 5 of them, 4 active in 2 MDs. You are right, it is scrap, but while it still works, why shouldn't I use it? I have set up health monitoring for the drives and hope to catch a crash early.
My apologies: I started with 2x160GB for another project (learning software RAID). Later I added another pair, then 4x500GB (2x 500GB MDs), then 2x320GB (1x 320GB MD), then 2x1TB (1x 1TB MD) … Yes, the beginning looks like a "discount from the scrapyard" ;).
So I keep adding pairs of physical devices, always with redundancy. I know there is a chance that two hard drives crash at the same time, but if the drives are from different pairs, I am still good :).
Now I have a bigger case with more hard drive slots, but I am waiting for another controller (I've actually hit the limit of the onboard controller and the add-on PCI-E controller too).