Audit system improvement idea

My question here would be: what happens to a Storj operator who has been loyal and has only run maybe a few nodes at a time? Let's say after 2 years the hard drive dies. I include myself in this, because everything that I have made so far has gone back into my hardware to try to support Storj the best I can.
If you have put that much time and effort into Storj as a whole, shouldn't there be a way for you to keep going instead of getting DQed altogether because of hardware failure? It wouldn't really make much sense for someone who has put a lot of time into this project to have to start all over again with escrow and vetting just because they lose a hard drive; hard drives don't last forever.

Yes, the satellite would have to send a full list of pieces + hashes. This is a lot of data and costs a lot of money. That's why, for garbage collection, they specifically don't send these lists but use a Bloom filter instead, so that this transfer can be significantly reduced.
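To illustrate the size difference, here is a minimal toy Bloom filter sketch. This is purely illustrative: the class, parameters, and hashing scheme below are my own assumptions, not Storj's actual garbage-collection filter format.

```python
import hashlib

class BloomFilter:
    """Toy Bloom filter: a bit array plus k hash functions (illustrative only)."""

    def __init__(self, size_bits: int, num_hashes: int):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray((size_bits + 7) // 8)

    def _positions(self, item: bytes):
        # Derive num_hashes bit positions from salted SHA-256 digests.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(bytes([i]) + item).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item: bytes):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, item: bytes) -> bool:
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))

# Garbage-collection idea: the satellite sends a filter of pieces that SHOULD
# exist; the node can delete anything the filter does not match.
pieces = [f"piece-{i}".encode() for i in range(10_000)]
bf = BloomFilter(size_bits=100_000, num_hashes=5)
for p in pieces:
    bf.add(p)

# Every live piece passes the filter; unknown pieces are rejected except for a
# small false-positive rate, which only means some garbage survives one round.
assert all(bf.might_contain(p) for p in pieces)
```

A full list of 10,000 piece IDs with 32-byte hashes alone would already be over 300 KB, while this filter fits in 12.5 KB, which is why a probabilistic filter is so much cheaper to ship to every node.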

No, that is a misunderstanding. I am fine with losing money to punish SNOs for losing data because of user error. The argument that you would get DQed because of a bad HDD sector is not correct; the repair service is able to deal with that without DQing storage nodes. You need to do more than that to get DQed.

Let's say you are running 5 nodes without RAID, or 4 nodes with RAID. Losing one node will cost you maybe $100. Can you earn $100 with 1 additional node in 2 years? How expensive is one additional hard drive? I have the feeling many people in here overestimate the held-back amount. Let's say you get $10 each month. In 2 years that would be $240 total, and only $22.50 will be held back. That is less than 10%.
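The $22.50 figure can be reproduced with a short calculation. The tiered percentages below (75% held in months 1-3, 50% in months 4-6, 25% in months 7-9, nothing afterwards, with half of the held amount paid back at month 15) are my assumption about the schedule being discussed, so treat this as a sketch rather than official policy:

```python
def held_amount(monthly_earnings: float, months: int) -> float:
    """Cumulative held-back amount after `months` months of equal earnings,
    under the assumed tiered schedule (not official Storj policy)."""
    held = 0.0
    for month in range(1, months + 1):
        if month <= 3:
            held += 0.75 * monthly_earnings   # months 1-3: 75% held
        elif month <= 6:
            held += 0.50 * monthly_earnings   # months 4-6: 50% held
        elif month <= 9:
            held += 0.25 * monthly_earnings   # months 7-9: 25% held
        # month 10+: nothing more is held
        if month == 15:
            held /= 2                          # half is paid back at month 15
    return held

total_earned = 24 * 10.0                # $10/month for 2 years = $240
print(held_amount(10.0, 24))            # $22.50 still held
print(held_amount(10.0, 24) / total_earned)  # under 10% of total earnings
```

With these assumptions, $45 accumulates over the first 9 months and $22.50 remains after the month-15 payback, matching the numbers in the post above.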


The node gets punished, not the operator. They are free to start multiple nodes and each node will have its own reputation. You can also start over and the new node will see no penalty.

It's not really the held-back amount that is the biggest deal breaker, it's the vetting with more than one node. I can keep adding more hard drives, but the problem is I will only add another node once the hard drive fills up; otherwise the vetting process will take forever. Maybe in the future, instead of all nodes being separate entries, it could have 5 nodes per cluster max.

It’s not… you’d be back to the vetting process and start over at 0 data stored. Additionally the plan is to increase that time frame to 15 months eventually.

If you're going to make a claim like this, you really have to back it up with some data, because it is frankly insane considering that most HDDs these days have less than 2% annual failure rates in the first 5 years of their lifetime. Unless you believe that suddenly spikes to over 80% because they are older disks, this statement is obviously false.

That's true, but without doing a graceful exit every once in a while, and instead waiting for the disk to fail, you'd lose the 50% escrow, which can be a large amount of money. Personally, I would do a graceful exit as soon as the node reaches 15 months of age.

I would recommend you calculate the held-back amount of your own node first. It is not as huge as you might think.

It’s not just the held amount. Let’s say my node got disqualified today and I started another one immediately.

  1. First month is going to be the vetting process. Pretty much zero traffic and zero tokens.
  2. After that, the node starts with (almost) zero stored data and very low reputation, so it is going to get less traffic than my old node did.
  3. At the same time, the escrow percentage will be high since it is a new node.

So, losing a node would likely mean a few months with almost zero tokens (and traffic), and after that, a long time until the escrow percentage drops.

While I understand that this is necessary to keep the really bad nodes out, the rules are, IMO, a bit too strict compared to the recommended setup. And this does not even include the uptime requirement, which, IMO, is way too strict (at least the official one).

The node cannot follow recommendations - the operator does. And the recommended setup is pretty much guaranteed to fail because hard drives do not last forever (at least with RAID I can keep replacing them).

After getting the 50% back, wouldn’t it still be 25% of total earnings of the first 9 months?

EDIT: Well, you’re right, that may not be a lot depending on how the network is used. Take average earnings of $20/month, that would be $45 escrow which is not a lot of money compared to having to restart a node. If it ends up being used like January that’s roughly $200 in escrow.
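The "25% of the first 9 months" figure checks out arithmetically, assuming a tiered schedule of 75%/50%/25% held over months 1-3/4-6/7-9 with half paid back later; those percentages are my assumption, not something confirmed in this thread:

```python
# Assumed tiers: 3 months at 75%, 3 at 50%, 3 at 25% held, half returned later.
monthly = 20.0                                        # average earnings ($/month)
held_before_return = (0.75 + 0.50 + 0.25) * 3 * monthly  # $90 accumulated
held_after_return = held_before_return / 2               # $45 after 50% payback
first_nine_months = 9 * monthly                          # $180 earned

# The remaining escrow equals exactly 25% of the first 9 months' earnings.
assert held_after_return == 0.25 * first_nine_months
print(held_after_return)  # $45, matching the figure above
```

The identity holds for any monthly amount: the tiers average out to 50% of 9 months' earnings, and returning half of that leaves 25%.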

Sure, a single HDD is guaranteed to fail, and an SNO is guaranteed to die at some point. If you don't include the average timelines at which these things happen, it always seems bad. @littleskunk perfectly outlined a few posts up that making money on all disks is more profitable even if 1 disk fails. But the truth is that in all likelihood, you'll be fine for many years with most HDDs.

I mean, calculate the USD amount. For my node it is $100, and that is including surge pricing.

My node's data doesn't go back far enough to calculate it from the node's end. :wink:
But from what I hear, displaying the escrow on the dashboard is already planned. I know for sure that my escrow is a fraction of what my node has made so far, though.
The only reason I'm using RAID is because I'm using an existing volume that has space remaining. I would definitely run separate HDDs if that weren't the case.

In which case the recommendations should say something like: "do this and you will be fine, but we will randomly DQ your node once in a while, so you can start over and sit with almost no traffic for months".

The simple setup could be shown as the “minimum viable” setup where you may get lucky or unlucky, but it is cheap, while a “recommended” setup should be the one that has a good chance of surviving for, let’s say, 5 years.

Great, because a single HDD should manage that easily on average.

And not a single bad sector in an inconvenient part of the filesystem metadata, on a hard drive that has a warranty of 1-2 years? That could happen. I do not know how likely it is, but for some reason the cheapest drives are not used in datacenters, especially not for anything even slightly important, without RAID and backups.
Still, if I follow that recommendation and the drive fails, the failure is on me, even though I followed the recommendation. Which is, IMO, kind of bad as a recommendation.

My escrow is somewhere in the $400.00 range; $249.25 calculated after 6 months.

Running a separate node for every single drive is a terrible idea from an SNO's perspective. That's basically guaranteeing you lose money. Eventually that drive will die, costing you your monthly income and escrow, as well as the initial cost of that drive. Then you have to buy a new drive, wait for it to be vetted, then for it to start filling up with data again before you get back to your expected monthly income. If it's just an amusing hobby to you and you don't care, then that's fine. Otherwise, the only reason I would ever run a single hard drive per node is as a cheap way of starting seed nodes. Once the node starts filling up and making a profit, I would move the data to a RAID array and use the single drive to start another seed node. Besides, nobody in their right mind doing this on a large scale wants the hassle of keeping track of tens, hundreds, or thousands of individual nodes.

The only reason I can see Storj recommending this setup is simply for their own benefit, and there's nothing wrong with that. It's the fastest, simplest, and cheapest way for the average Joe to get a node up and running, and that's what Storj needs. If Storj recommended using RAID, then everybody would expect support from Storj on how to go about doing that, and it would turn some people off Storj by overcomplicating things for the average person. It would also hurt the amount of initial storage capacity available by increasing the investment cost to SNOs.

However, there's nothing preventing those of us who can and want to take it a step further from doing so. It's just in Storj's best interests to keep it cheap and simple for now, because let's face it: eventually storage will go the same way as mining and be primarily concentrated around larger players, and those players will not be running one drive per node. Once the real profits start rolling in (fingers crossed), the little guys won't matter so much anymore. Unfortunately, that's just the way things work.


It’s important to note that an IPv4 node within the same /24 subnet is going to receive less data than a node in a less populated subnet. So, if one is running 100s of nodes in the same /24 subnet, one is “doing it wrong” … However, centralization or at least silo-ing of Storj nodes in data centers is inevitable.

A data center is already running the correct hardware, and running a Storj node will help the data center offset some of the sunk costs in running the correct hardware which is always going to be under-used.

However… I have learned a lot since starting my own node, and running plain RAID seems to be a bad idea. It's ZFS with 3 or more drives… of the correct specs… that seems to be the way to go.

In the last few weeks, I’ve also come across Ether-1 which I hadn’t seen before… So, there’s competition out there. As SNOs, it’s probably a good idea to setup a decent file server and run a few competitors simultaneously.

Good point there too. I have redundant connections from different providers, so I run nodes on both, as well as in other locations. If I used the one-drive-per-node method, I would have a mess on my hands.

As for existing data centers, I'm sure there are some out there that will run a node or two, and yes, there is competition, as there should be. But based on what I've been seeing, it appears that the demand for data storage is growing at such an exponential rate that data centers can't even keep up with it. Keep in mind, people hardly ever delete anything, and with machine learning and quantum computing becoming more of a thing, that demand will just skyrocket.

As for RAID, I assume you mean typical hardware RAID being a bad idea. Personally, I run 12-disk raidz3 vdevs, which works out great for me.

I also agree with the idea of diversifying, but I wasn't going to be the one suggesting it on Storj's forum, haha. Either way though, I'll keep expanding as space fills up, so it's not like Storj loses out here in my case.
