So Are You Saying There's a Chance?

Hi helpful people! It's been a week from hell, hope y'all have been better!

I just pulled a dummy dum-dum move and tried to swap two drives out of a RAID5 at the same time (don't ask). Long story short, it was not rebuildable, but I did pull as much as possible off of the drives with some fancy recovery software.

I had my .db and identity backups, since I'm not a total idiot…and so the node is now back up and running…kind of. It took me 7 days to bring it back online, and I am confident there are many files either missing or messed up in some way (I had to rebuild my config.yaml from scratch because it was blank, for example).

The logs are showing PUT and GET actions, but sometimes a "download failed…invalid (or) file does not exist" error shows up, among other similar errors about missing or corrupted files it can't delete.

Anyone have any thoughts on whether this node will stay alive over the next while? Or is it in its death throes and will be fully disqualified soon (frankly, I was surprised it wasn't already)??

If it does get disqualified in the next few days, would it be worth doing a graceful exit, or should I just scrap it and start over? I'm not worried about the held amount, but more concerned about the Storj network as a whole and learning how to deal with issues like this if they happen again down the road.

I am willing to work with the devs if they want to troubleshoot or dig into this scenario!

(edited for spelling and grammar)

I don’t think there is much you can do except wait and see.

Any audit failures?

Not full audit failures, but four are at 99.8%.
Suspension: 90%–95% on three satellites
Online score: around 70% on all of them

Looks like I am missing around 750GB from where it was before (according to the dashboard).

I have a feeling those numbers are going to drop as the node gets put through its paces…I'm not holding out much hope…but there is a little.

All I can add is that if the file loss is less than 2%, your node will survive. If it's more than 4%, it will die. Anything in between will depend on luck.
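To put a rough sketch behind those percentages: the satellite tracks an audit score that behaves roughly like a weighted moving average of pass/fail results, and disqualifies the node once it drops below a threshold. Here's a minimal simulation, assuming a simple exponential update with λ = 0.999 and a 0.96 DQ threshold — both assumptions for illustration only; the real satellites use a beta-reputation formula with their own parameters:

```python
import random

# Assumed parameters, for illustration only -- not the satellites' real settings.
LAMBDA = 0.999        # how much weight the old score keeps on each audit
DQ_THRESHOLD = 0.96   # score below which the node is disqualified
AUDITS = 20_000       # audits to simulate per run

def survives(loss_fraction: float, seed: int) -> bool:
    """Simulate audits against a node missing `loss_fraction` of its pieces."""
    rng = random.Random(seed)
    score = 1.0
    for _ in range(AUDITS):
        passed = rng.random() > loss_fraction  # audit lands on a missing piece with p = loss_fraction
        score = LAMBDA * score + (1 - LAMBDA) * (1.0 if passed else 0.0)
        if score < DQ_THRESHOLD:
            return False
    return True

for loss in (0.02, 0.03, 0.04):
    survived = sum(survives(loss, seed) for seed in range(100))
    print(f"{loss:.0%} loss: survived {survived}/100 simulated runs")
```

With those assumed numbers, 2% loss settles comfortably above the threshold while 4% hovers right at it, which is at least consistent with the rule of thumb above.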

If the loss is as big as you said, I'm afraid it's the end of the line for this node. It kind of depends on whether you've lost data on all satellites. If I were in your position, I'd start a second node right now. Perhaps you can get close to being vetted before the other node is completely gone. Or maybe, if some sats have data mostly intact, you can limit the original node to just those sats.

Graceful exit won’t work if you have lost too much data. You won’t get your held amount back.

Sorry I couldn't give better news, but data loss is just a big no-no.

3 Likes

Day 8 (after I messed up my Storj node)…so, continuing my journal of errors. Like the poster above me said, this probably won't survive, but I am here for the journey; call it R&D.

My node is still making money. It hasn't crashed any more (it had some files on the wrong drive initially, which caused that). BUT the suspension score is down to 80% on a couple of satellites, although the audit score has leveled out at 99%!

My hope is that I can keep this node. Even if I don't make any money for the next couple of months (I'll take the hit for the rebuilding and repair), it would be good to see whether this system has a good way of recovering from issues like this. Not all the data was lost; I still hold data for customers…just not all of what the satellites might expect.

I will keep this journal updated as things get worse/better…for science!

Thank you to everyone who has been weighing in so far!

3 Likes

I would definitely keep the node running too, as long as it survives :slight_smile:

I'm surprised that your suspension scores keep decreasing though: if your system is back online and working, albeit with lost and corrupted files, I would expect your audit scores to drop as a result, not your suspension ones.

Maybe this should be investigated? See:

1 Like

You'll make money as usual, but your node is going to free up some space, as many pieces have already been relocated to other nodes on the network during your offline period.

Oh, don't worry about the Storj network, it can cope with lost files and even lost nodes without a problem. It's been designed to be incredibly resilient to data loss.

Is this still the case? I mean, did you actually lose that much in the end?
If so, I must say I doubt a node can survive that much data loss; I agree with @BrightSilence.
Unless your node was initially massive, like 40TB or so…

1 Like

That's not possible on a single node atm. I think the largest nodes are about 25TB right now, and even then 750GB would put you right in the danger zone at about 3% loss. You might get lucky and outgrow the issue before disqualification.

@miicar204, how big is your node? Anything over 4% loss is most likely a death sentence, but if it's close to that, it could take a really long time to actually be disqualified. My advice remains to start an additional node now; that way you don't drop to nothing by the time this one gets disqualified.

1 Like

Yea…so it's a small node, my first one, built as a proof of concept out of random and discarded drives on my home server. I was in the process of putting larger drives in and expanding the available space when I effed up (friends, don't do RAID upgrades at 3 am from memory, without notes). I guess I'll hold off on the rest of the upgrades till I figure out whether this node will even exist in a month.

It's not large…I probably kept 4% of the data after the restore…lol…yeah, it's gone.

BUT, atm, I am still getting uploads and downloads, starting and finishing…(along with download-failed errors every 100 log entries or so).

Has anyone ever run two nodes on the same HDD array? I'll probably set up a second node (was planning on doing this for another site anyway…) and hold onto it till I figure out whether this OG one is going to stick around or not.

So that means it lost 96% of its files? That's weird; I would have expected the disqualification to be way faster in such a scenario. We've seen disqualifications happen in less than 24h in the past, though that was before the new audit system.

There must be something I don't get, because the 99% audit score suggests only about 1% of files are lost… ^^'

I think all kinds of setups exist, but some are better than others :wink:
It's usually not recommended to have two nodes on the same HDD (and technically it goes against the ToS):

I personally do have a disk that runs two nodes: one main node (4TB) and a small node (500GB) beside it that I kind of "incubated" so it's ready to be deployed/expanded in the future.
I would not recommend doing that if you have other options, because whenever the filewalker hits both nodes at the same time, the disk crawls under the heavy IO load (and I believe RAID arrays don't help much with IO load). Also, lose the disk and you lose both nodes (although I guess a RAID5-style array is supposed to prevent that from happening).

1 Like

Since it's a RAID5 array, you are technically dedicating more than one disk per node, so it's not really against the rules in a literal sense? I've run two filled nodes on a 4-disk raidz1 array without issues, so you should be fine (it is not optimal, though).

In your situation, I'd set the maximum size of the old node to 0. This stops it from wasting ingress while still making the most of it before it gets DQed.
In the meantime, start a new node; the sooner it's running, the better :slight_smile:
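For anyone wanting to do that shrink in practice, a minimal sketch: drop the allocation in config.yaml (or the STORAGE value if you pass it via docker) to below what the node already uses, which stops new ingress while the node keeps serving the pieces it still has. The key name below is the one a typical storagenode config.yaml uses, but it may differ by version, so check your own file:

```yaml
# config.yaml excerpt (sketch) -- set the allocation below current usage so
# the node stops accepting new uploads but keeps serving what it already has.
storage.allocated-disk-space: 100.00 GB
```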

Side note: it might be better to move away from RAID arrays; more work for less disk space. I moved all my Storj nodes from raidz1 to ext4 and now just serve every 4/8/16 TB disk fully on its own.
If a disk dies, so be it; there are X more running, issue-free nodes. I claim warranty if I can, and replace it.

1 Like

As an SNO, this attitude bothers me (just saying); it's why I use RAID, why I took a week to recover as much data as possible from my screw-up, and why I will employ any other means to protect customers' data. Am I trying to be the most reliable Storj node on the network??? !!!HELL YES!!! As an SNO, that SHOULD be our goal!
I know, I know, the system is set up to heal itself, but that puts strain on it and shouldn't be the go-to fallback. No hate, you do you, and if the ToS allows it, have at it! But for me and my servers, I'd rather be more diligent.

After careful thought, I chose RAID5 because it wastes the least amount of HDD space (N−1 disks of usable capacity, compared to N/2 for RAID1, etc.). That's just my 2 cents on that, and why I run RAID5. (And yes, I learned my lesson to double-check things before I hit enter…and no, I've never done this before in all the time since last year's POC.)

For now, we are just trying to get this to work and be successful, but in the future I do hope Storj employs a blacklist for IPs that repeatedly trash their nodes and just start another one (maybe a strike system per year??). Sure, that won't stop people from changing their IP/DNS to get around it, but it will hopefully encourage people to treat customers and their data with more respect in general. As SNOs, we are paid for providing a service, and as service providers we should want to be as reliable as possible (within reason, of course…which is why I'm using existing infrastructure and old decommissioned HDDs till the payout is worth investing more).

One could argue your diligence caused you to lose 100% of the data the moment this node gets disqualified, instead of only one disk's worth (1/N of it).

That aside, while I admire your altruistic attitude towards providing reliable service, the network can't be and isn't built around that assumption. You have no obligations to the network other than what is outlined in the ToS, and you're expected to do what is most profitable within those confines. Unless you have lots of HDD space to spare, running individual nodes on individual HDDs is simply more profitable, and it also happens to spread the risk should a problem arise. Many SNOs underestimate the risk of user error. Using RAID compounds that risk and ensures that when something does go wrong, all is lost. I'm not necessarily convinced this is in fact better for the network in the end.

I also have my issues with RAID5. Best case, while rebuilding or expanding, the array is only as reliable as RAID0. Worst case, the array fails at the first unrecoverable read error during a rebuild, with odds that in many cases exceed 50% on large arrays. I would argue that at network scale, RAID5 likely doesn't provide more reliability than individual disks, which is why I run either individual disks or RAID6 (or equivalent).
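To put a rough number behind that ">50%" claim, here's a back-of-the-envelope sketch assuming the 1-in-10^14-bits unrecoverable read error rate that consumer drive datasheets commonly quote (real-world rates vary, often for the better, so treat this as a pessimistic bound):

```python
import math

# Odds of hitting at least one unrecoverable read error (URE) during a RAID5
# rebuild, which has to read every bit of every surviving disk.
# Assumptions: 1 URE per 1e14 bits (common consumer datasheet spec),
# independent errors, Poisson approximation.
URE_PER_BIT = 1e-14

def rebuild_failure_odds(disks: int, tb_per_disk: float) -> float:
    bits_to_read = (disks - 1) * tb_per_disk * 1e12 * 8  # all surviving disks
    return 1 - math.exp(-URE_PER_BIT * bits_to_read)

for disks, size in [(4, 4), (4, 8), (8, 8)]:
    print(f"{disks} x {size}TB RAID5 rebuild: "
          f"~{rebuild_failure_odds(disks, size):.0%} chance of at least one URE")
```

Under those assumptions even a modest 4×4TB array is already past a coin flip per rebuild; RAID6 can absorb that first read error, which is why it's mentioned above as the alternative.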

That said, you're allowed to run your nodes the way you want. Just don't overlook that you are consolidating risk into an all-or-nothing proposition, which makes user error a lot more dangerous.

4 Likes

The RAID5 volume is treated as one disk by your OS, right? So you are running two nodes on one big disk. I wouldn't say that is compliant with the Node Operator Terms & Conditions.
So please do not encourage others to break the rules.

If you do that, the node will be DQed: natural deletions by customers will shrink the healthy data while the percentage of missing data grows. With ingress traffic it could survive, since the missing data would become a smaller percentage of the total. The missing data will eventually be repaired to other nodes, and after that your node will no longer be asked for those pieces. But it could take a long time before the number of healthy pieces reaches the repair threshold, and the node could be disqualified before that.

2 Likes

Is that new? That’d be nice! But I’ve always been told that a failed audit does not relocate the faulty piece elsewhere (unless the rebuild threshold is hit), making it possible for this piece to be audited again later (even though chances are very low).

Edit: In fact, I guess that’s what you meant by “eventually”.

No, they will not be replaced immediately, only once the number of healthy pieces for the affected segment drops below a threshold. Please note: your node usually stores only one of the 80 pieces for a given segment.
So a big loss means many segments are affected, and it's unlikely they will all reach the repair threshold at the same time, so it could take a noticeable amount of time.
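A tiny sketch of why a single damaged node barely moves any one segment toward repair (the 80-pieces figure is from the post above; the 29-pieces-to-reconstruct and 35-piece repair threshold below are assumptions for illustration, not the satellites' actual settings):

```python
# Each segment is erasure-coded into pieces spread across different nodes.
PIECES_STORED = 80      # pieces per segment (from the post above)
MIN_NEEDED = 29         # assumed: any 29 pieces can reconstruct the segment
REPAIR_THRESHOLD = 35   # assumed: repair only triggers at or below this count

# A single node holds at most one piece of a given segment, so even if this
# node lost its piece, the segment still has 79 healthy pieces -- far above
# the repair threshold, so nothing gets repaired or handed off right away.
healthy_after_my_loss = PIECES_STORED - 1
print(f"healthy pieces: {healthy_after_my_loss}, "
      f"repair triggers at: {REPAIR_THRESHOLD}, "
      f"still recoverable down to: {MIN_NEEDED}")
```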

2 Likes

UPDATE: Four of my satellites are in "disqualified" status now (not at all surprised). Ingress and egress are still happening on the node, but I fear the end is nigh!

Thank you to @Alexey for clarifying some things I had already assumed from reading the ToS.

I am curious whether doing a graceful exit right now would be a good idea, or if I should just let it ride for a month and see what happens.

There were a lot of files that were "restored" but with the wrong size or content (as some of the log errors about the expected size confirm)…if this node does survive, what will happen with those damaged files? Does the Storj ecosystem have a way to eventually clean up broken files/trash like this?

Graceful exit won't really help you; you'll get disqualified for losing the files you should have had anyway, and you can't exit on a satellite you were disqualified on.

The files will not be removed until they need repair on the network as a whole. And even then I expect they won’t be immediately removed, but only when garbage collection kicks in. But I wouldn’t worry about that, it’s either an insignificant amount or you will be disqualified. There are no other options. As for file health on the network, it’s not a problem. The network has plenty of redundancy to ensure the file remains healthy.

1 Like

Yeah, I re-read this and will just leave the node up and running with the minimum space allocated (more than is currently used, of course). Maybe I'll get lucky. Will report back in a week (for posterity, if nothing more).

2 Likes

UPDATE:
Well, like we all expected, I woke up this morning to all six satellites now showing as suspended! Ingress has slowed to a couple of MB a day, and egress has gone from 1.5GB to about 0.5GB over the last couple of days!
The logs now show GET errors for pretty much every entry besides the few successful "delete piece sent to trash" entries. There don't seem to be many PUT actions (if any at all).

I'll probably leave this up for another week, till the start of next month, to see if I get any of the ~$0.23 of this month's earnings. Even though there isn't a lot of data left on here, I think on the first of the month I will do a graceful exit of this node, so I can use the space for the new node I started.

I'm glad I do the silly, risky stuff on my own proof-of-concept nodes, and take my time learning before touching my production nodes…most of the time :wink:

1 Like