I started my node project with a new identity roughly 33 hours ago, and “docker ps -a” gives me the following:
5fa7d6d84a4f
storjlabs/storagenode:latest
“/entrypoint”
Created: 31 hours ago
Uptime: 13 hours
So something was definitely wrong with its active status, so I went ahead and checked the logs. Here is the output I managed to gather using “docker logs” and “docker events”:
For some odd reason, a signal was sent from the OS to terminate the node:
2022-03-10T04:57:31.839Z INFO Got a signal from the OS: “terminated”
2022-03-10T04:57:46.846Z WARN servers service takes long to shutdown {“name”: “server”}
2022-03-10T04:57:46.852Z INFO servers slow shutdown
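For reference, these are roughly the commands I used to gather the above (assuming the default container name storagenode):

```
# Last lines of the node log, then keep following new output
docker logs --tail 100 -f storagenode

# The node logs to stderr, so redirect before filtering for errors/warnings
docker logs storagenode 2>&1 | grep -E "ERROR|WARN"

# Container lifecycle events (start/stop/kill/die) as they happen
docker events --filter container=storagenode
```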
Which I suppose would normally result in failed uploads, though I’m not entirely sure about the below:
I have kept the node “stopped” for now, and I would really appreciate it if someone could suggest how I can troubleshoot this in depth, as it has happened more than twice and I can’t seem to keep the node running normally.
Following the setup steps, watchtower was also set up using the command from the manual. I’m not entirely sure whether the difference in uptime shown above (created 31 hours ago vs. up 13 hours) should bother me.
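If it helps, the watchtower setup I followed looks roughly like this (the exact flags may have changed, so double-check the official documentation):

```
docker run -d --restart=always --name watchtower \
    -v /var/run/docker.sock:/var/run/docker.sock \
    containrrr/watchtower storagenode watchtower --stop-timeout 300s
```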
I’m not really an expert on this, but the first thing that came to mind was whether your HDD is SMR rather than CMR, which could trigger an I/O overload and kill the service for not responding… maybe that’s nonsense.
Let’s wait for expert @Alexey
I have configured RAID-0 with four 3.5" HDDs working in sync, for a total of 2.4 TB, and honestly I have never thought about how they record data. I might need to check that, perhaps on their physical labels or online.
Could it be that my RAID setup is the problem here?
In the meantime, if anything else comes to mind - please shoot straight away.
RAID-0 by itself is not really the issue; I operate 5 nodes totalling over 40 TB and they are all on RAID-0 without any issues so far.
Maybe you can check if it’s SMR by the Part Number.
Also, are you aware if you have bad sectors on them?
Bear in mind the RAID-0 risk: if one HDD goes kaput, the whole node is lost.
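A quick way to pull the part number and the sector-health counters, assuming smartmontools is installed and the drives are visible to the OS (USB enclosures sometimes need -d sat):

```
# Model / part number - look it up against the vendor's CMR/SMR lists
sudo smartctl -i /dev/sda

# Reallocated or pending sectors hint at bad sectors
sudo smartctl -A /dev/sda | grep -Ei "Reallocated_Sector|Current_Pending"
```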
Another issue with RAID is that the IO is limited to roughly one HDD’s worth at most; of course there are some bandwidth advantages and such.
But since all disks need to perform an IO for the RAID to store one data stripe, it becomes a pretty simple calculation.
I haven’t used RAID-0 in a very long time, but this doesn’t sound right. Depending on the method for creating and managing the RAID, and the way files are written, it could be possible to write to all the drives concurrently, but that would surely take serious tweaking. Even so, if you write to a 10k RPM drive instead of a 5.4k RPM drive, the speed should be bottlenecked by the RAID ‘controller’ or by the drive being written to or read from, not by another drive in the same RAID-0.
I will admit this is the case for RAID-1: since you’re mirroring data across drives, you are limited by the slowest one.
Storj recommends using Docker Desktop CE v2.1.0.5 on Windows if your Windows doesn’t support WSL2 or it is not enabled. I’m not sure whether this is your issue, so check if WSL2 is enabled.
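You can check from PowerShell or CMD; the distro name below is just an example:

```
# List installed distros and the WSL version each one runs under
wsl -l -v

# Convert a distro that is still on WSL 1, if needed
wsl --set-version Ubuntu-20.04 2
```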
Any reason you don’t want to use the native Storagenode GUI for Windows?
I second the idea that RAID-0 is not ideal as one bad drive will kill the entire node. In my opinion it would be better to run independent nodes on each independent disk. If you did decide to go with the Storagenode GUI, you can only install one node with it and would need to use a workaround like Vadim’s Windows Toolbox to install additional nodes. However the GUI is still the most stable option for a node on Windows.
I really don’t know whether the RAID-0 can be the issue here. I will check the drive type later on; I’m not at home at the moment. As for the sectors, I’m not sure if it’s relevant, but I ran paid software that performed a low-level format on all 4 of my HDDs. They were previously used in a desktop PC, and they all have different write speeds. Otherwise, yes, I chose RAID-0 because I wanted to combine all 4 of my drives and run them through a single port on my PC. I use an Orico HDD hub; too bad the data transfer is over USB 3.0.
Thank you for this explanation. Perhaps I need to switch to another, more suitable RAID mode, or simply run 4 nodes, each using 1 of the 4 HDDs. Hopefully for that I only need 4 identities and can keep them all on the same network.
Yes, I remember that I ran an upgrade - damn. In any case, the Docker application on my Windows machine currently has WSL2 enabled, and I use Ubuntu 20.x as my distro.
Can you tell me a bit more about this “Storagenode GUI for Windows” or any guide for it?
If you do decide to migrate, note that the docker/linux versions require the path to include the folder called storage whereas the GUI on a new install does not, so getting paths correct is important. There is a guide for this here:
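To illustrate the path difference mentioned above (an example layout only, not the guide itself):

```
# docker / linux node: the mounted data folder contains a "storage" subfolder
/mnt/storj/storagenode/
├── config.yaml
└── storage/
    ├── blobs/
    └── trash/

# The Windows GUI on a fresh install points its storage path at the data folder
# directly, without the extra "storage" level - hence the migration caveat above.
```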
In terms of IO, all RAIDs basically act the same: the data is striped across all drives of the array, and thus each disk performs one or more IOs for each RAID stripe written or read.
The stripe size also plays a role, which is why I said multiple IOs per stripe.
Let’s use your disks as an example… a sector on most HDDs today is 4k, and it takes 1 IO to write a sector.
So a stripe size of, let’s say, 64k means writing 16k to each of your 4 disks, for a total of 64k in one write… of course those 16k writes on each disk are then 4 IOs each, but sequential IO is pretty fast.
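To put numbers on it (values assumed, matching the example above):

```
# Back-of-the-envelope stripe math - all values are illustrative
STRIPE_KB=64    # full stripe written across the array
DISKS=4         # drives in the RAID-0
SECTOR_KB=4     # physical sector size

echo "chunk per disk:          $((STRIPE_KB / DISKS)) KiB"          # 16 KiB
echo "IOs per disk per stripe: $((STRIPE_KB / DISKS / SECTOR_KB))"  # 4
```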
Storage nodes do a lot of writes, so mitigating that helps a lot; RAM on your RAID controller or a write cache will do wonders for performance.
Splitting the disks up would be a much better approach, as you will get the full read/write IO from each disk; RAID isn’t great for storage nodes, especially if you don’t have a write cache.
What stripe size are you running?
Basically, RAID makes HDDs act as one… which comes with some disadvantages, especially on the IO side of things.
@baker - I may consider setting up the node that way then and try using the GUI instead. I previously shut down my node, so I will just start a new one and see what the results are.
Thank you for the information - I believe the stripe size is 64 KB. Following your suggestion about splitting up the disks: would another RAID mode be an option, or do you suggest simply running nodes independently on separate disks?
One question about that: can I have 4 nodes, one for each of my disks, running on the same computer? Is that recommended? In another thread I believe I was told that I would need a separate IP for each node setup.
You can have as many nodes as you like; their behavior / ingress will depend upon their global IPs.
The network data distribution is selected by /24 subnet.
So basically, if you set nodes up on one or more IPs within the same /24 subnet, they will all get data as one node.
If you have IPs in different /24 subnets routed to each node, then the nodes will behave as multiple nodes.
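For example, you can check which public IP a node sits behind (any IP-echo service works; ipify is just one option):

```
curl -s https://api.ipify.org
# e.g. 203.0.113.57 -> its /24 subnet is 203.0.113.0/24
# Nodes whose public IPs share those first three octets share ingress as one node.
```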
Running 1 node on 1 disk is the recommended approach, and it should be fine.
Of course, RAID (with mirroring or parity) can give you one-disk redundancy to protect against disk failures, but that is really only preferable for older nodes, since it can take years for a node to grow to a decent size.
At that point it can be nice to know that the node won’t die because an HDD dies.
But for new nodes, running RAID is impractical.
And since your disks seem to have trouble keeping up when running in RAID,
I would recommend that you create 4 nodes, 1 per disk. You don’t have to create them right away; you can simply add new nodes as the existing ones fill up. Given the size of your disks, you would be fine with 1 or 2 IP addresses. At the current pace, 2.4 TB will most likely take about a year to fill, maybe even less.
I’ve got nodes from December that are at 400 GB stored, so that’s like 100 GB per month, but early on, while they are vetting, ingress is reduced.
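For reference, a second node’s run command could look roughly like the sketch below. Every value here (ports, wallet, hostname, paths, size) is a placeholder, each node needs its own identity, and the authoritative command is in the official documentation:

```
docker run -d --restart unless-stopped --stop-timeout 300 \
    -p 28968:28967/tcp -p 28968:28967/udp \
    -p 127.0.0.1:14003:14002 \
    -e WALLET="0x..." \
    -e EMAIL="you@example.com" \
    -e ADDRESS="your.external.address:28968" \
    -e STORAGE="900GB" \
    --mount type=bind,source=/mnt/disk2/identity/storagenode2,destination=/app/identity \
    --mount type=bind,source=/mnt/disk2/storagenode,destination=/app/config \
    --name storagenode2 storjlabs/storagenode:latest
```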
4 nodes on 4 different disks behind one public IP will act like a RAID at the network level - they will spread the common traffic between them. In simple words, the traffic would be the same as if there were only one node.
Each node will be used only a small amount if you start them all at once.
So the best approach is to run only one node; when it is filled up, or at least vetted, start the next one.
When the allocated space is used up entirely, the ingress traffic to that node will stop, but egress will still exist. Customers can decide to remove their data; then your node will have free space again and ingress will resume.
How do I go about getting this (/app/config) removed, so I can deploy the node?
I went through several threads on the matter, but nothing seems to work for me. I’m starting the node through WSL 2. After formatting my drives I had to mount the drive manually (for some strange reason), as WSL was not detecting it. Once I did that, listing it shows:
D: 932G 132M 932G 1% /mnt/d
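For anyone hitting the same thing, the manual mount inside WSL looked roughly like this (drive letter assumed to be D:):

```
# Inside the WSL distro: create a mount point and mount the Windows drive via drvfs
sudo mkdir -p /mnt/d
sudo mount -t drvfs D: /mnt/d

# Confirm it is visible
df -h /mnt/d
```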
I’m using SSH from my MacBook to connect to my Windows home PC, which will be running the node. I’m not always at home, so this was a good way to gain access to the machine. I’m not sure whether the node can only be initiated and set up on the machine itself, but it should not matter, right?
Let me know if anything else as information is required.
So basically you have to decide whether you will keep the old node… or delete its data before using the same location.
If you’ve got any decent amount of time on the storage node, I would recommend trying to keep it.
Otherwise it doesn’t really matter too much.
It should work just fine even after days of downtime…
so long as you have the identity and the stored data.
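If you do decide to discard the old node, the cleanup is roughly the following (container name and data path are placeholders - double-check before deleting anything):

```
docker stop -t 300 storagenode   # give it time to shut down cleanly
docker rm storagenode            # remove the old container
rm -rf /mnt/d/storagenode/*      # wipe the old data that was mounted to /app/config
```

If you keep it instead, reuse the same identity and the same mounted paths in the new run command.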