First try at running a storage node went badly

I set up and ran a 40 TB server. I suppose there was a power outage; in any case, the server restarted and I realized I had forgotten to make the firewall settings persist across reboots. I fixed that, but two days later the same thing happened: I had forgotten to set up automatic reconnection to the network. I am a noob. So I look at the dashboard and see data evacuation.
[screenshot]
Is this recoverable, or am I better off starting over with a new identity?

What are you asking, whether the node is DQed? If not: as of right now, uptime only affects reputation.

Is reputation recoverable?

Yes, it is, over time. How old is the node?

2 weeks :heart: :heart: :heart: :heart: :heart: :heart: :heart: :heart:

Yeah, you're safe, don't worry about it. If it were based on uptime then I'd worry, but you're OK. You don't need to start a new node. Just don't fail any audits, or you will be DQed instantly.

Thanks a lot! How do I know if I'm DQed?
Also, is there a way to reconfigure the disks to RAID without getting DQed? As of now they are fresh and young, but that time will come eventually.

No, you would need to back up your data if you're planning on reconfiguring them. You will know from your dashboard, if you check each satellite, whether you're DQed or not.

Do you work for storj.io?

Nope, I'm just here to support other SNOs and take some of the load off the people who do work there. If I can help with what I know, then I will.


If you mean that the disk space used graph looks like it is going down, you should know that it really isn't. The graph always dips at the end because the last day isn't over yet. It does not mean that data is being removed from your node.

Depending on how they are configured currently, it might be possible to do this without any downtime.

For example, if the files are stored on a filesystem that’s on an LVM logical volume (LV), you can use pvmove to move the volume to another device even while the filesystem is still in use.
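As a rough sketch of what that looks like in practice (the device names /dev/sda1 and /dev/sdb1 and the volume group name vg_storj are placeholders I made up, not anything from your setup):

```
# Prepare the new disk as an LVM physical volume
pvcreate /dev/sdb1
# Add it to the existing volume group
vgextend vg_storj /dev/sdb1
# Move all extents off the old PV while the filesystem stays mounted and in use
pvmove /dev/sda1 /dev/sdb1
# Remove the now-empty old PV from the volume group
vgreduce vg_storj /dev/sda1
```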

If you have a second physical disk, you can:

1. create a degraded RAID1 (mirror) on it with mdadm,
2. run pvcreate on the mirror device,
3. vgextend to add it to the LVM volume group,
4. pvmove the volume onto the degraded mirror,
5. vgreduce the unused PV out of the volume group, and
6. add the device underlying the old PV as the other half of the new mirror.
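Roughly, that sequence could look like this; again, the device names (/dev/sda1 as the current PV, /dev/sdb1 as the second disk), the array name /dev/md0, and the volume group name vg_storj are assumptions for the sake of the example:

```
# Create a degraded RAID1 with only the second disk; "missing" reserves the other slot
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1
# Turn the mirror into an LVM physical volume and add it to the volume group
pvcreate /dev/md0
vgextend vg_storj /dev/md0
# Move the data onto the degraded mirror while the filesystem stays in use
pvmove /dev/sda1 /dev/md0
# Drop the old PV from the volume group and wipe its LVM label
vgreduce vg_storj /dev/sda1
pvremove /dev/sda1
# Add the old disk as the second half of the mirror; it resyncs in the background
mdadm /dev/md0 --add /dev/sda1
```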

Note that this only works if the existing LV doesn't use all of the available space on the underlying device; otherwise it won't fit into the mirror, since a small amount of space is taken up by the md-raid metadata.
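One way to sanity-check that before starting is to compare the LV size against the usable size of the new array (same placeholder names as above):

```
# Size of the existing logical volume(s), in bytes
lvs --units b -o lv_name,lv_size vg_storj
# Usable size of the degraded mirror, slightly less than the raw disk
# because of the md-raid metadata
blockdev --getsize64 /dev/md0
```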