I feel like nothing new is coming out anyway so I’m gonna just pass on this topic. Anyway I think there’s been plenty response from SNOs and I think the feedback is clear to Storj. I’ll be waiting for more official announcements.
I don't know, Mr. Cutiee Pie, but I'm sitting on a €200 NAS with two 8 TB disks. I barely make $25 a month… after 2 years…
And I'm scratching my head trying to keep it from dying under the blows of various filewalkers…
And I’m not French…
0.5W per TB is possible
In the past they just put you on a waitlist and gave the activation code only when they decided that they need more nodes on the network. The same thing can be brought back I think.
Because some information was hidden from us. Or maybe Storj did not know it themselves at the time.
For example, nobody knew that the expensive-to-run edge services were the main way to use the network.
Also, the hope was that with lower prices there would be enough traffic to get good payouts. In February, my node uploaded 684 GB at $20/TB and got ~$13. If the rate were dropped to $5/TB but the traffic went up to 6 TB/month, then I would get $30 and be happier than with the current rate. Of course, if the rate is dropped and traffic stays the same, then I get about $3.40 and am unhappy (quick math below).
It's similar with the payments for storage, though in that case, if the demand for storage went up significantly, I would have to buy new drives and expand the pool; but if traffic went up along with the amount of data, it would most likely be fine.
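To put rough numbers on that trade-off, here is a quick back-of-the-envelope sketch. It uses only the figures quoted above (684 GB of egress, $20/TB vs. a hypothetical $5/TB); real payouts also include the storage component, held amounts and so on, so treat it as illustrative only:

```python
# Back-of-the-envelope egress payout: rate in $/TB, volume in TB per month.
def egress_payout(rate_per_tb: float, egress_tb: float) -> float:
    return rate_per_tb * egress_tb

# February example from the post: 684 GB of egress at $20/TB.
print(egress_payout(20.0, 0.684))  # ~13.7 -> roughly the ~$13 mentioned

# Rate cut to $5/TB, but traffic grows to 6 TB/month.
print(egress_payout(5.0, 6.0))     # 30.0 -> better than today

# Rate cut to $5/TB, traffic unchanged.
print(egress_payout(5.0, 0.684))   # ~3.4 -> much worse
```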
Yeah sure… with a single server running at least a hundred 20 TB drives!
Yeah because clearly they didn’t understand the market and how people think. Is it the easiest way to use Storj? Does it cost more? What did they think was gonna happen? This stuff ain’t rocket science. Having the foresight to see this sort of thing should be built into the business strategy.
I totally understand the hope here, but Storj should have known better.
I'm sure the edge services initially started as something temporary, to get customers to try out the network before switching to the scalable, end-to-end encrypted ways of accessing it. However, there is nothing more permanent than a temporary solution, and now it seems that almost all customer access goes through the centralized services.
Umm… yes?! Are you serious?
Please enlighten me.
Again, seriously? I've already said multiple times in other posts that this is a joke. The idea of ONLY using UNUSED space? How much "unused" space does the average Joe have kicking around inside their computer? And don't forget you can't really include laptops, since most people don't leave them on all the time. Let's say ALL 22,000 nodes were individual people with 500 GB of unused space each. That's ONLY 11 PB! That wouldn't even store what is already on the network! Now realistically that 22K is probably more like ~3,000 individual people. That's only 1.5 PB. STORJ WOULD BE DEAD IN THE WATER!!! Stop it with this "unused" hardware nonsense, it's a f*ing legal disclaimer. Storj knows everyone is buying hardware. The node software isn't even optimized to run on anything other than a dedicated drive, come on… Just to accommodate the current data on the network you'd need some 38,000 actual individuals running on "unused" space, and to earn what… $1.50/mo? Give me a break, a monkey with half a brain could figure this out on a table napkin!
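For what it's worth, the napkin math above checks out if you accept its assumptions (500 GB of spare space per person, ~22,000 nodes, ~3,000 distinct people, and roughly 19 PB already stored; none of these are official figures):

```python
# Napkin math, using only the assumptions from the post above.
GB_PER_PERSON = 500            # assumed "unused" space per average person, in GB
NODES = 22_000                 # node count quoted in the post
PEOPLE_ESTIMATE = 3_000        # the post's guess at distinct operators
STORED_PB_ESTIMATE = 19        # implied by the post's "some 38,000 individuals"

to_pb = lambda gb: gb / 1_000_000  # GB -> PB, decimal units

print(to_pb(NODES * GB_PER_PERSON))            # 11.0 PB if every node were a separate person
print(to_pb(PEOPLE_ESTIMATE * GB_PER_PERSON))  # 1.5 PB with ~3,000 real people
print(STORED_PB_ESTIMATE * 1_000_000 / GB_PER_PERSON)  # 38,000 people needed for ~19 PB
```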
Then how in the hell do all the other companies charging $23/80 stay in business? Backblaze should be ruling the world if it’s so great.
Fair…
See previous point… with one more point being, Storj is a middleman. Backblaze runs their own hardware optimized to be (almost) as cheap as possible.
No they don’t. And how does 4 TB qualify as “unused” by the way? Maybe in rare cases, but most people don’t buy 4 TB worth of space without plans of using it for something.
But I thought we were talking about “UNUSED” and “EXTRA” space here. Most average computers aren’t going to have more than an extra 500 to maybe 800GB of “extra” space. How freaking long does that take to fill up? And nobody cares about 500 GB. It’s not enough, and never will be enough for anyone to waste their time with. None of this makes any sense for a sustainable business model. Sorry, but it is what it is.
What oversupply do we have? Nobody pays for free space, so a large amount of free space doesn't raise costs by itself. Clients and Storj pay only for used space and bandwidth. So keeping free space online is a decision for the space provider, because it costs energy. Storj has always said: when your first node is almost full, start the next one, not all of them at the same time, and then there is no need to complain that they eat a lot of power while no space is used.
Today a /24 IP is getting around 50-60 GB of ingress a day, so it fills fast (quick calculation below).
So I don't see any reason to count a lot of free space as part of the economics problem today.
The main problem is the used-space price and the bandwidth price; let's concentrate on that.
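If that 50-60 GB/day ingress figure holds (it varies with how many nodes share your /24, with deletes, and with test data, so take this as a rough sketch only), a drive fills in months rather than years:

```python
# Days to fill a given amount of free space at a steady ingress rate.
# Deletes are ignored, so this is an optimistic lower bound.
def days_to_fill(free_tb: float, ingress_gb_per_day: float) -> float:
    return free_tb * 1000 / ingress_gb_per_day

print(days_to_fill(4.0, 55))   # ~73 days for 4 TB
print(days_to_fill(30.0, 55))  # ~545 days for 30 TB behind a single /24
```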
I’m the avg user and i don’t even have 50gigs free on any computer i own which is why i has to buy dedicated hardware for storj. Cause i got lots of pron stored. An i gots to use storj to store the rest of my prons cause i’m too cheap to buy hd for myself.
So storj pays me to store my prons on storj.
Without bought hardware there is NO market.
Can’t disagree there.
I've seen differing opinions on the performance. I believe there are strengths and weaknesses, but I don't believe the strengths are being used to their full potential. Also, Storj could develop front-end applications, but they don't seem to be interested. I also don't see it being difficult for Storj to meet and probably exceed regulatory standards in time, but currently this is mostly fair to say.
Probably to turn SNOs against each other for the blame and direct our attention away from Storj having a shitty business model.
Not sure I follow… why would you need multiple VPS’s to fill 500GB?
Ahh… I see now. You consider your 30 TB "unused" space and are upset about how long it takes to fill THAT up. Go set up a VPS or two or 15 then, everybody else apparently does. By the looks of it you could probably fill that 30 TB in a month or so with about 15 of them.
I REALLY hope I’m wrong because this really is a great idea. They just have to figure out how to make it work.
I have not done any real testing myself, and even then I’m limited to a gigabit connection anyway at the moment. But some of the numbers do look impressive. And I imagine they would only get better as more nodes come online.
Being end to end encrypted I don’t think it should be difficult… however due to regulation procedures and all that maybe it will take time for regs to catch up to the tech.
Gotcha!
Must be a trade secret or something.
Almost nobody would bother with it. 500 GB would get you $0.75/month for storage and something like $2/month for egress if you are lucky, probably less. That is with the current rates; with the proposed rates it's $0.50 and $0.50 (rough math below). Due to the transaction costs, such a node operator may see one payout per year, except in the first year (held amount).
Setting up the node and giving it space you will not be able to use later (a node cannot easily be shrunk), all to get $12-$30/year (in tokens; converting the tokens to fiat involves more transaction costs and also exchange fees).
Maybe some people would bother with the effort for that amount of money, but I suspect most wouldn’t. Maybe some people really have 10TB of free space on their always-on NAS that they plan on never using, but I suspect most don’t.
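As a rough check on those numbers: the $1.50/TB-month and $20/TB rates are the current ones discussed in this thread, while the roughly $1/TB-month and $5/TB figures are back-solved from the "$0.50 and $0.50" above and are not official; held amounts and payout thresholds are ignored.

```python
# Monthly earnings sketch for a 500 GB node.
def monthly_payout(stored_tb: float, storage_rate: float,
                   egress_tb: float, egress_rate: float) -> float:
    return stored_tb * storage_rate + egress_tb * egress_rate

# Current rates, assuming ~100 GB of egress per month ("if you are lucky").
print(monthly_payout(0.5, 1.50, 0.1, 20))  # 2.75 -> $0.75 storage + $2.00 egress

# Proposed rates as read from the post above.
print(monthly_payout(0.5, 1.00, 0.1, 5))   # 1.00 -> $0.50 + $0.50
```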
Not really. I have already explained above - no one reads.
- During the life of the project, pools with dozens of nodes holding hundreds of TB have appeared. They will be profitable even at the new prices.
- There is a lot of decommissioned server equipment, hard drives and free electricity in the world. When all of this finds each other, it will work perfectly well with a payback period of 2-3 years. This is "long money", which the world also has plenty of.
- As I understand it, the average ratio of nodes to wallets is 10, that is, on average one operator runs 10 nodes. Nobody seems to care about that.
- The network is flooded with synthetic data - it will flood any amount onto any number of nodes; that is how it is arranged. If the number of nodes is not reduced, the token pool will run out.
Based on the above, it should be understood that the question of “profitability” refers more to the owners of large pools, and not to the network. The network does not need 22 thousand nodes to function - it will work with 2,000 nodes.
Hence this post:
2 to 3 years? Hardly seems worth doing with decommissioned hard drives. Pools appeared with that many TB because it's profitable, not because it's barely breaking even. And sure, "long money" is great and all, but if it's not earning anything reasonable it simply ain't worth doing. If the profit margins are so low that you need free electricity to pull it off, I can't see anybody investing even their time into this unless it's some retired dude who happens to live next to a hydro plant and has access to tons of decommissioned equipment. If there's no bait on the hook the fish don't bite. Now if the problem is synthetic data then f*ing stop the synthetic data… It's not a hard concept. But that's not the whole reason for the token pool drying up. There are many other factors at play there too. You make it sound as if the 2-3% Storj is trying to save by cutting SNO payouts is going to save the token pool from drying up. Not gonna happen.
Yes, I totally agree with you, but don't forget that the world is full of pensioners with a ton of decommissioned server equipment that they got for free, living next to a power plant with free electricity. And they will always be in profit. The /24 protection was the only thing that stopped them, but IP addresses have become much cheaper over the past 3 years.
Btw.: seems to be the longest thread in the forum, no?
Never seen such activity here (in the forum) before.
It's run its course. It's just a free-for-all now I think, haha.
I suppose you're right there, but there would still be far fewer nodes and fewer data connections contributing to the one real advantage Storj has to work with.