That is the reality of running a real business. You don’t buy cheap garbage; you buy whatever has the highest ROI (a long-running HDD with low energy consumption). I also factored in everything it takes: a rack in a real data center, labor, network.
That’s not my point. If you buy 1000 disks, you can get them for far less.
Anyway, you’re kind of wrong. You buy the highest ROI, right, but sometimes the highest ROI comes from garbage (short-lived HDDs).
Last time I read the Backblaze report on disks, they presented data showing that Hitachi and Toshiba disks had failure rates 10 to 100 times lower. All things considered, they said they prefer replacing Seagate and WD disks frequently; it’s cheaper than buying the Japanese disks.
Lucky for us, the prices are around the same if you just buy one disk. Turns out that when you sell garbage, you can offer huge discounts on bulk orders. Hitachi just can’t do it because they don’t manufacture garbage…
Actually, these numbers are based on high-quality enterprise systems, factoring in discounts I am familiar with from running much larger operations. $5/TiB/month is all in, not just disks but running a storage business. That is a typical break-even for a business running 10PiB of storage (assuming the disks are nearly full).
You’ll earn back your investment in roughly 20 months. I don’t know what HDDs you buy, but that HDD would still be in warranty if it failed at that time… Most HDDs easily last 5 years, and usually much longer. In 5 years that $400 HDD will earn you $3000. Subtract $400 for initial cost and $300 for energy cost (using $5 per month for that), and that’s $2300 of profit. And every additional year would earn you $800 minus $60 in energy costs.
So what part doesn’t make sense?
Furthermore, after 34 months that HDD is full and you have made $1200 − $400 − $170 (energy) = $630 profit. Plenty to buy a new HDD and keep making even more profit on two nodes.
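The arithmetic above can be sketched in a few lines. All figures are the post’s own assumptions ($400 HDD, $5/month energy, $1200 earned by month 34, $3000 over 5 years); nothing here is an official payout model.

```python
# Sketch of the payout arithmetic from the post above.
# All figures are the post's stated assumptions, not measured data.

HDD_COST = 400        # USD, Exos-class drive (assumed)
ENERGY_PER_MONTH = 5  # USD (assumed)

def profit(earnings, months):
    """Net profit after hardware and energy costs."""
    return earnings - HDD_COST - ENERGY_PER_MONTH * months

print(profit(1200, 34))  # 630  -> profit when the node fills at month 34
print(profit(3000, 60))  # 2300 -> profit after 5 years
print(800 - ENERGY_PER_MONTH * 12)  # 740 -> each additional year
```

Swapping in your own earnings and energy numbers makes it easy to sanity-check the break-even point for your setup.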
Of course, all of this feels a lot more manageable if you can start with HDDs you already own and only spend money earned with Storj (which is what I did). Especially if you can also use that HDD space for personal needs long term (which at this point is only partially true for me).
I’m gonna outcompete you… I have no costs other than the HDD purchase and energy use. And the above calculation assumes Exos HDDs with a 5-year warranty, not cheap garbage. I won’t have PB scale, but 100 node operators like me would. And we’d make plenty of money while, by your estimate, you lose money.
Unfortunately I couldn’t attend today’s Twitter Space live, but I listened back when I got home. I want to post some of what was mentioned that is relevant to the discussion here, but I recommend also listening to it for full context. I would hate to quote people out of context, and there is always a danger of that when picking snippets. Listen to it here: https://twitter.com/storj/status/1593695583335653376
Thanks @john for providing some more context in the following quotes.
Twitter transcripts aren’t perfect, so I corrected the text between brackets.
I think that what everybody expects is certainly that [we] will make a change to what we’re doing with the [held amount] and then also what we’re doing with the amount of money that we pay out for egress currently
So here we have the first hints at changes that might impact storage node operator payouts. Held amount has been discussed extensively in this topic, so we have some idea of what might be considered. Of course, the scarier part of this quote is the mention of a possible change in egress payout. I’m not going to speculate here on what that change would be (beyond what I already did in the top post); I’ll leave that up to everyone to do on their own. I would like to note that John stressed several times that the intention is to always ensure running a node is rewarding, and that changes won’t be sudden and will be extensively discussed with the community beforehand.
how we’ll use surge payments
Possibly referring to a way to bridge the gap between the current payout system and future changes, or to manage supply in times of need. But no additional specifics were given.
The second area that we are focusing on is doing work around the actual unit economics of the storage layer itself and making sure that we’re doing the most efficient things with the storage, so that as an operator of satellites it’s also [a] rewarding endeavor for people generating business on the service.
Some of these things were also discussed in this topic: changing RS parameters, adjusting segment size, and making repair more affordable are likely part of this. But again, no specifics beyond that were mentioned. So I’d say, keep the ideas coming.
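As a rough illustration of why RS parameters matter for unit economics: the expansion factor (pieces stored vs. pieces needed to reconstruct a segment) directly multiplies how many node terabytes the network pays for per customer terabyte. The 29/80 figures below are commonly cited Storj defaults, and the 65 alternative is purely hypothetical; treat all of them as illustrative assumptions, not a proposal.

```python
# Illustrative sketch of how Reed-Solomon parameters drive storage
# overhead. k=29 / n=80 are commonly cited Storj defaults; the n=65
# variant is a made-up example of tightening the ratio.

def expansion_factor(k, n):
    """Pieces stored (n) over minimum pieces (k) needed to reconstruct."""
    return n / k

print(round(expansion_factor(29, 80), 2))  # 2.76
print(round(expansion_factor(29, 65), 2))  # 2.24 (hypothetical)
```

Every 0.1 shaved off that factor is storage the satellite no longer has to pay nodes for, which is presumably what “unit economics of the storage layer” refers to.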
And then the last thing is also looking at what our prices are publicly, in terms of what we’re selling, what features are valuable, and customer feedback as well.
because in some cases what we’re offering at a very reasonable price today is a huge value-added service, and there isn’t necessarily an [analogue] or comparison at that price point anywhere on the market today, and so there may be opportunities where we’re underselling some of the things that we’re offering.
This makes sense, but I’d say it’s a matter of timing. You want to grow popularity and recognition before raising prices, especially on existing offerings. Of course, new offerings with different requirements could be launched at higher prices as well, perhaps in combination with some changes in RS parameters and segment size. Different products could be offered, each optimized for different use cases.
It was also reiterated that some of these ideas would be shared with the community in a little more detail starting in December, followed by a white paper in Q1 with more extensive detail. So I’m looking forward to those discussions.
You’re comparing a held-back amount that is built up only once with earnings that return every month. I know you’ve seen the calculation I posted by now, so I’m not sure what part of this you aren’t getting.
1TB earns you roughly $3.30 every month, but the held amount for that 1TB builds up to $10 only once.
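Using the post’s own figures (~$3.30/month per TB earned, ~$10 per TB held back once), a tiny loop shows how quickly recurring earnings overtake the one-time held amount:

```python
# Recurring earnings vs. one-time held amount, using the post's
# stated figures (assumptions, not official payout rates).

MONTHLY_PER_TB = 3.30  # USD earned per TB per month (assumed)
HELD_PER_TB = 10.0     # USD held back once per TB (assumed)

months = 0
cumulative = 0.0
while cumulative < HELD_PER_TB:
    months += 1
    cumulative += MONTHLY_PER_TB

print(months)  # 4 -> earnings exceed the held amount within 4 months
```

After that crossover every further month is pure gain, which is the asymmetry the post is pointing at.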
You will not be disqualified for transferring data to another location. You can transfer the majority of the data online; only the last sync will require downtime, and it will be pretty short compared with the main transfer.
See How do I migrate my node to a new device? - Storj Node Operator Docs
This assumes all the storage and egress comes from real customers (paid services). In the past, a lot of test/dummy data was being uploaded to fill storage nodes. As of today, do we know whether all the data traffic is paid by customers, or whether there is still test data paid for out of the Storj token sale reserves?
I recall actually being surprised when I checked these things. IIRC, a typical PC power supply is less efficient (80–95% at peak consumption, but dropping to ~60% when severely underutilized) than a typical external drive’s (~95% at peak consumption, and these power supplies are sized so that they’re always close to peak for the devices they power).
That was misstated by me. 20 months was the time to earn back your investment, not to fill the node. That’s roughly 34 months as I had already mentioned later in the post. This is based on my earnings calculator which was recently updated to reflect the latest network behavior. I corrected my original post.
The rest of the numbers I mentioned were correct. You make a lot of money on that HDD before those 5 years are up. Since I actually bought one recently, trust me, I did the math.
I stated my calculation. If the node is full after 34 months, you use way more than half of the HDD over those 5 years on average. Also… why should I care, if the ROI is 500% over those 5 years?
Sia hosts do not provide the same level of reliability. Part of this is software quality: the Storj storage node code is just much better than the Sia host code, and there seems to be some drama around that problem. Part is that the Sia network has different reliability standards, and upload fees reduce the incentive to provide reliable long-term storage.
Either that, or work on a way to use the uplink protocol purely client-side, in in-browser JavaScript, which would reduce the need for gateways. There are some problems that would need to be solved to do the latter. I do not see this feature on the official roadmap though; I suspect there are enough customers satisfied with libuplink or gateways to fuel the development process already.
@BrightSilence has described his idea several times, one of them here. Held amounts were a recurring discussion topic for some time, so if you’re interested in some ideas, it might be worth spending some time browsing old posts.
If you’re migrating a whole disk to another, it’s better to copy the whole disk image rather than copying file by file. This way you perform a single sequential read over the whole drive capacity instead of tens of millions of small reads scattered all around the drive. Copying a modern 18 TB drive this way would effectively take a bit less than two days, so well within the allowed downtime.
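The “a bit less than two days” figure follows from capacity divided by sequential throughput. The ~120 MB/s average below is an assumption (large drives read faster on outer tracks than inner ones, so the sustained average sits well below the peak):

```python
# Rough whole-disk sequential copy time estimate. The average
# throughput is an assumed figure, not a benchmark.

CAPACITY_TB = 18
AVG_SEQ_MBPS = 120  # assumed average sustained read speed, MB/s

hours = CAPACITY_TB * 1e12 / (AVG_SEQ_MBPS * 1e6) / 3600
print(round(hours, 1))  # 41.7 -> under two days
```

A file-by-file copy of millions of small pieces, by contrast, is dominated by seek time rather than throughput, which is why it can take a week or more on the same hardware.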
Indeed. Because of this lack of feedback, I mostly limited my participation in discussions like that to correcting obvious omissions in other people’s posts. It would be nice, for example, to learn which economic model constraints are known to Storj but missed by us SNOs in this discussion. On the other hand, I understand that some of that knowledge may not be easy to share while keeping a competitive advantage. I also absolutely understand that it is difficult to formulate any public statements, however positively they are meant, without triggering some backlash. I’m therefore fine waiting for the first drafts of the whitepaper, as mentioned in the Twitter Space.
On the contrary, it makes sense as a way to reduce power usage. A single large drive will require less power than many small drives. And we are in an economy model thread, making this difference crucial for the topics under discussion.
Besides, a single large drive is now cheaper than many small drives (both in per-terabyte cost and in the drive bays it occupies). When consolidating, a SNO may sell the older drives, recovering some of the cost.
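A quick back-of-the-envelope comparison makes the power argument concrete. The wattages below are ballpark assumptions, not measurements: a modern high-capacity drive idles around 5–6 W largely regardless of its size.

```python
# Illustrative power comparison for the consolidation argument.
# Wattages are assumed ballpark figures, not measured values.

LARGE_DRIVE_W = 6  # one 20 TB drive (assumed)
SMALL_DRIVE_W = 5  # one 4 TB drive (assumed)

many_small = 5 * SMALL_DRIVE_W  # five 4 TB drives for the same 20 TB
one_large = LARGE_DRIVE_W

print(many_small, one_large)  # 25 6
```

Roughly a 4x difference in idle draw for the same capacity, which at typical electricity prices compounds over a drive’s multi-year life.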
I didn’t use your misstated period to fill a 20TB node; I used my personal experience: 60 months for 20TB. Let’s call it “statistics performed on a population of 1”.
You might be doing statistics on older network performance; I caught the train when it was moving slowly. Also, I’ve noticed that recently ingress has been much, much higher than anything I’ve ever seen.
34 months to fill up is indeed much better than an average of 1/2 utilization over 5 years. But still, you’re wearing down a 20TB disk with little return (compared to a full 20TB node) for ~2 years. That’s 40% of its life (granted, I did assume a disk dies on the day of its 5th anniversary).
The disk will probably not die after 5 years, but it could also die before it’s 5 years old. The warranty period is a probability calculation by the manufacturer (the cost of replacements during 5 years versus the cost of selling fewer disks if the warranty is only 3 years). And keep in mind that the data is not under warranty. When I say 20TB is all you can have, using your calculations, I mean that you can’t hold more than ~20TB behind a single /24 IP block. When you lose the disk, that’s game over for you.