Well, I’m not sure what you mean by eco design. For sure, a data center can be more efficient, unless you take into account that a portion of the SNO resources would be running with or without Storj nodes.
As for resilience, I started having some doubts when they dropped Russian SNOs. They could also drop some clients if the US government doesn’t like them.
This is not very Web3-like… decentralized: check; trustless: no check; permissionless: no check.
What would happen to the clients’ data if the Storj team was assassinated by the well-known terrorist group AFAS, "Armed Forces Against Storj"?
What would happen to the clients’ data if the Storj team was paid off by the competition to drop the satellites and go live in the Bahamas?
Can the community (SNOs) rebuild the satellites so that clients can be made whole on their data?
Just asking…
FYI, at no time did Storj drop Russian SNOs. Please check, for example, http://storjnet.info/ to verify this yourself; you can scroll back to last year and check every month. What we did do is make sure that our network would stay resilient in case internet access were restricted for any reason in that region. However, if and when sanctions are imposed against any country by the US government, Storj Labs has to comply with them and will do so. In the case of Russia, the current sanctions do not require us to stop payments to Russian storage nodes.
What you described is still kind of a drop… though making sure the network stays resilient under an internet restriction is a good thing. Anyway, it was not my intention to make you look like the bad guys. I understand that you have to comply with whatever the US government comes up with; I’m just saying it’s not a good thing…
I believe Storj’s greatest strength is its ability to scale concurrent transfers to saturate any pipe size, and to do so at the same low cost. You can’t do anything like that with a classic, non-distributed data center: they don’t have unlimited bandwidth, and if you wanted to maximize what they do have, it would cost a small fortune.
Storj should figure out which markets this benefits and focus there. The competition would then be only among other distributed systems, which narrows the options and showcases Storj’s strengths rather than its weaknesses.
Of course, this isn’t the only strength, and every avenue where Storj can beat traditional data storage vendors should be capitalized on. But I think the concurrency is a wow factor that can open a lot of doors.
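The concurrency argument above can be sketched in a few lines. This is a toy model, not Storj’s actual client: the piece counts, latencies, and function names are illustrative assumptions. The idea it shows is that when a segment is spread as redundant pieces across many nodes, the client can race them all in parallel and keep only the fastest responses, so aggregate throughput grows with the number of nodes rather than being limited by any single server.

```python
# Toy sketch of racing many storage nodes and keeping the fastest pieces.
# All parameters here (20 pieces, 10 needed, latencies) are made up for
# illustration and are NOT Storj's real erasure-coding numbers.
from concurrent.futures import ThreadPoolExecutor, as_completed
import random
import time

def fetch_piece(piece_id: int) -> bytes:
    """Stand-in for a network fetch of one piece from one storage node."""
    time.sleep(random.uniform(0.01, 0.05))  # simulated per-node latency
    return bytes([piece_id % 256]) * 64     # dummy 64-byte piece payload

def fetch_segment(total_pieces: int = 20, needed: int = 10) -> list[bytes]:
    """Request all pieces concurrently, stop once `needed` have arrived."""
    pieces = []
    with ThreadPoolExecutor(max_workers=total_pieces) as pool:
        futures = [pool.submit(fetch_piece, i) for i in range(total_pieces)]
        for fut in as_completed(futures):
            pieces.append(fut.result())
            if len(pieces) >= needed:
                break  # slow nodes ("long tail") no longer matter
    return pieces

print(len(fetch_segment()))  # 10
```

Because the slowest nodes are simply never waited on, one congested server can’t drag the whole transfer down, which is the property a single classic data center pipe can’t replicate.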
I think Storj needs to focus more on the storage node operators in its pricing model: look at what we get for free, what costs us money, and where the difference lies compared with the datacenter competition.
What is free for me: Traffic. I have an unmetered connection to my home.
What is expensive for me: disk space (yes, "only use hardware you have lying around" - I do, but free space keeps shrinking because I use it, and it’s quite difficult to shrink a Storj node) and electricity.
So as a node operator, I need a payment per TB of otherwise-wasted disk space, and I have a minimum monthly income (around 5 USD) below which running my hardware is not viable due to the extra electricity cost of spinning the disks.
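The break-even point above is easy to put into numbers. The ~5 USD/month floor is from the post; the drive wattage, electricity price, and per-TB payout rate below are illustrative assumptions, not Storj’s actual rates.

```python
# Back-of-the-envelope node operator break-even.
# Assumed numbers: 10 W for a spinning disk, 0.30 USD/kWh, 1.50 USD/TB/month.
# Only the ~5 USD/month viability floor comes from the post itself.
def monthly_power_cost(watts: float, price_per_kwh: float) -> float:
    """Electricity cost of keeping hardware running ~730 hours a month."""
    return watts / 1000 * 730 * price_per_kwh

def break_even_tb(payout_per_tb: float, monthly_cost: float) -> float:
    """Stored TB needed before payouts cover a given monthly cost."""
    return monthly_cost / payout_per_tb

power = monthly_power_cost(watts=10, price_per_kwh=0.30)
print(round(power, 2))  # 2.19 USD/month for one disk
# TB needed to clear both the electricity bill and the ~5 USD floor:
print(round(break_even_tb(payout_per_tb=1.5, monthly_cost=power + 5), 1))
```

Plugging in your own wattage, tariff, and payout rate shows quickly whether a node with a given amount of spare space clears the viability floor.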
So from my point of view, Storj could profit by making traffic free and paying node operators more per TB stored. That would attract bandwidth-heavy customers, lured in both by the high bandwidth available and by the low bandwidth cost.
Why is there no incentive whatsoever for customers to use a native implementation instead of the S3 gateway, when that would save Storj all the bandwidth cost of running the gateway?
Still, keeping some traffic fee would make sense - in my opinion - in this configuration, because it makes a difference to the network whether nodes run on 5 Mb/s or 1 Gb/s connections; in this scenario, higher speed should be rewarded better.
Consider the AI rush triggered by ChatGPT - training AI models requires a lot of data, which has to be processed hundreds or thousands of times throughout the training process, and of course the faster, the better…
@Knowledge - do you have any numbers to share on how Storj’s speed compares with “classic” data center speeds (e.g. AWS)?
But that’s just the point: that sort of bandwidth is only available to “the select few” with very deep pockets when on AWS.
Storj democratises that sort of speed to anyone with a pipe that’s fat enough.
Storj Labs has a number of tools at their disposal to optimize these transfers significantly, depending on the customer use case. If a large customer wanted to work with the company to manage moving huge datasets around, the company would certainly engage with them to find the solution that best fits their needs.
Wouldn’t local storage always be faster? Fastest are the drives inside the server, slightly slower is a file server in the same rack or datacenter, and slowest is anything outside the datacenter where your VM runs.
IMO Storj and similar services are only useful if the data is accessed from multiple different locations. Otherwise, just use whatever your datacenter has, since it will be fastest and/or cheapest.
Currently I’m managing a little over 130 TB of Storj storage in different data centers around the globe, which has been mostly a “hobby” for the last 5 years. But giving me a 50 to 75% haircut on earnings will definitely have me looking at some other sort of gratification. At the current level there was at least some reward. Please don’t trim more than 25%, especially for the older nodes…
A bit of a conspiracy theory: could the high incoming/outgoing “test data” last month be the generation of extra FEC to mitigate a massive SNO exodus after the payout decrease?
Test data goes to the test satellites; that hasn’t changed. There are some challenges, and fixes are going in to better manage abuse of the free tier. Once that is in place, we’ll get a better picture of normal data flow. I suspect some data will fall off over time as terms-of-use abusers are removed from the system.