Different classes of storagenodes

Hello SuperBoy (I watched your old Blender tutorials on YouTube :wink: )

I'll also give my two pence; this is actually a suggestion. We’re all different individuals with different objectives and motivations for running a node. The first one, for a lot of us, is obviously the payout in tokens, but for others it could be a hobby for enthusiast nerds, and ultimately also an “archiving” reason. One of my motivations to be in your network is participating in the Internet of Tomorrow: decentralized, mostly open-source, secure, resilient against climatic disasters and wars, and a bit more independent and free than before. Thus, because not every S.N.O. is focused on the money side, I suggest something that could reduce costs (at least on the SNO side) and allow more plans and use cases on Tardigrade for a larger audience.

Offer something like three options that you can enable or disable in the GUI dashboard or in the config file:

  1. The top-notch premium service as it exists today, with the same (or almost the same) prices for storage and egress.
  2. An option for archiving/storage for personal use (with a lower SLA): in that case, the SNO is paid only a fraction of the storage and egress currently earned under the existing plan. On the Tardigrade side, you align your prices to be competitive with other providers of personal use/storage, with limitations on redundancy or uptime if necessary. In that case, we could even imagine SNOs that don’t meet the demanding conditions of the current SLA (higher downtime, slower connections to the rest of the world, an overall slower node) serving these customers instead of being disqualified, based on a lower SLA to be defined. These nodes could still join and participate in the network, but the satellites would exclude them from the first plan, and disqualification would still apply if they fall below the maximum downtime defined for this plan.
  3. The “free” plan, for non-profits, humanitarian organizations, and Internet archiving teams: in this plan, SNOs give away their storage for free, as a donation/contribution to a common cause, like COVID-19 research. On the Tardigrade side, it might not be free but priced at a minimal, non-profit rate to maintain the infrastructure (or maybe also free and absorbed by the other plans, or even deducted from the SNO’s held balance if they check and accept that option?). With this kind of option, we could also join causes like the Internet Archive (or projects like the Archive Team: archiveteam.org).

For each option, the SNO could enable/disable it and assign storage to it (by percentage of the total, or more precisely by units, a bit like the Humble Bundles where you can select a portion for the developer and for non-profits). On the Tardigrade side, you also assign priorities to the traffic to maintain the same quality of service for the current plan: 1 > 2 > 3.
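To make the idea a bit more concrete, here is a rough sketch of what such a per-class allocation could look like from the SNO side, written as a small Go program purely for illustration; none of these options exist in the current storagenode, and the class names, fields, and priorities are made up:

```go
package main

import "fmt"

// ClassAllocation is a hypothetical per-class setting an SNO could toggle
// in the dashboard or config file; nothing like this exists today.
type ClassAllocation struct {
	Enabled        bool
	PercentOfSpace int // share of the node's total allocated space
	Priority       int // 1 = premium, 2 = personal/archive, 3 = free/charity
}

func main() {
	// Example split matching the idea above: most space stays on the premium
	// plan, a slice goes to personal/archive storage, and a token 1% to charity.
	allocations := map[string]ClassAllocation{
		"premium":  {Enabled: true, PercentOfSpace: 89, Priority: 1},
		"personal": {Enabled: true, PercentOfSpace: 10, Priority: 2},
		"charity":  {Enabled: true, PercentOfSpace: 1, Priority: 3},
	}

	total := 0
	for name, a := range allocations {
		fmt.Printf("%-8s enabled=%v %3d%% priority=%d\n", name, a.Enabled, a.PercentOfSpace, a.Priority)
		total += a.PercentOfSpace
	}
	fmt.Println("total allocated:", total, "%")
}
```

The satellite would only need the enabled classes and their percentages; the priority order would then decide which class wins when traffic competes.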


I second the idea of different storage classes :+1: because it is the reality on the demand side.

I wouldn’t rate it with different SLAs but rather with different temperatures (data can range between :hotsprings: hot :hotsprings: & :snowflake: cold :snowflake:).

So the very first debate is on the classes list/purpose/objective. And there is the next (big?) issue of the payout (or lack thereof) for cold storage classes…

As a SNO, if my node is hosting 100% cold data that gets deleted over time, I will stop operating that node after 15 months…


Thanks. :+1:

Yes, that’s why, in my idea, the SNO would be able to assign a percentage or units to each of the three options. So you would be free to disable an allocation, or only be forced to keep a minimal percentage (to ensure a fair amount of storage space for these customers). Or keep the SNO free to choose, but with the options then frozen for a period of time (like one month). And finally, it could be a simple ENABLE/DISABLE option. For the selected options, Storj manages the node dynamically to optimize the distribution and filling evenly according to resource needs. From the Tardigrade perspective, it would also be easier to allocate the pieces dynamically, to optimize distribution and filling smartly without giving SNOs the choice, but that would result in a lower/random payout depending on the “customer class”, and I guess they would end up losing more SNOs if those SNOs don’t have the option to choose for themselves or guarantee a fixed allocation on their nodes. You could also separate the classes entirely, of course, but in my opinion it’s a better idea to be able to assign them all within one dynamic node.

I think “cold storage” could open the door to a lot of new, slower nodes (mostly limited by hardware or connection), opening availability to many more people around the world who want to join the cloud but couldn’t join the network normally (in that case, operating a slow node is better than nothing). But I also think only a few nodes would accept to assign storage to it, namely those with a lot of storage to share but not enough pieces/traffic from the main core service (it’s highly probable SNOs will get frustrated if the test traffic stops and they realize they’ll have to wait months or more to fill their drives with real customer data; in the current scenario, Storj keeps the incentives going to avoid that).

But it’s only a general idea, obviously!

My own personal take: if Storj gives me the choice to assign myself to lower-paid/free storage options, I would assign something like 10% of my space to open fair access to individual customers (as if it were myself) in the case where I have a lot of free space, and maybe a little 1% for free for the “good of humanity”, eventually letting us participate in Internet archive projects. 1% wouldn’t hurt the traffic much, considering the satellites could prioritize the traffic, but it could be a decent help at large scale. However, I know Storj is already helping open-source developers and COVID-19 research centers within its own model, so that’s already beautiful if the model works. An official “free plan” would just be another way to open the door to more general-interest programs.

I guess Storj/Tardigrade will deploy more specialized services itself if the rocket is launched into space and gets close to the -golden- sun. Time will tell.

As for me, it would be 0% for the last class :sweat_smile: but this added parameter would make things more complicated for a new SNO.

To be honest with you: I think this was just a marketing stunt… It probably attracted 0 researchers, but hey: it was good PR because it did attract you! :innocent:

Those are clearly weeded out (it is in the whitepaper) so I don’t think there is any interest in them…

Anyway we don’t decide anything in here.


We’re still free to put some ideas on the forum, exchange a bit, and see what happens. Seeds will grow eventually. :wink:

Almost all of the requirements for SNOs are related to the reliability of data storage or the availability of data, both of which you really can’t compromise on, even for lower-tiered storage solutions. If you allow more downtime, it comes with the risk of files becoming unavailable. That’s really not an option. The only thing that remains is allowing slower connections, which would likely be pointless, because the minimum requirements don’t really matter there. What matters is competition with other nodes. Slow nodes would still not win that competition, as faster nodes with enough space would likely join the lower tiers as well. The way Storj works is simply not optimized for the cold storage scenario. Data is available at high speeds because of how it’s spread out among many nodes and how highly parallelized transfers are. No change in node requirements is going to change that. So I just don’t see this being feasible.
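To illustrate the point about parallelism and node competition, here is a minimal sketch (not Storj’s actual uplink code; the piece counts and latencies are made up) of the race described above: more piece downloads are started than are needed to reconstruct a segment, and only the fastest nodes to respond are kept, so a slow node loses the race no matter which tier it joined:

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// fetchPiece simulates downloading one erasure-coded piece from a node.
// Slow nodes simply take longer to deliver their piece.
func fetchPiece(nodeID int, latency time.Duration, results chan<- int) {
	time.Sleep(latency)
	results <- nodeID
}

func main() {
	const (
		requestedPieces = 39 // downloads started in parallel (made-up number)
		neededPieces    = 29 // pieces required to reconstruct the segment (made-up number)
	)

	results := make(chan int, requestedPieces)
	for node := 0; node < requestedPieces; node++ {
		// Random latency stands in for node speed and distance.
		latency := time.Duration(10+rand.Intn(200)) * time.Millisecond
		go fetchPiece(node, latency, results)
	}

	// Keep only the first neededPieces to arrive; the slow long tail is
	// ignored (in the real network those transfers get cancelled).
	winners := make([]int, 0, neededPieces)
	for len(winners) < neededPieces {
		winners = append(winners, <-results)
	}
	fmt.Printf("reconstructed from the %d fastest nodes: %v\n", len(winners), winners)
}
```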

I do, however, like the charity-oriented option. A specific satellite for those organizations, which SNOs can opt into offering space and bandwidth for, would be awesome. Storj would have to keep a curated list of which charities and non-profits are using it so we could decide whether to join the satellite or not. But the service almost by definition has to be the same quality as the full Tardigrade network. The only difference would perhaps be the number of nodes participating.

I should add btw, that this satellite doesn’t necessarily have to be managed by Storj. A third party could run and manage this satellite as well. It almost sounds like something archive.org may be interested in running. They could start by using it as their own storage backend. I’d also be very happy to offer the EFF free storage if they would start their own satellite. I think it’s starting to become time to think of this as a less centralized approach. They can run these things themselves. I’d love to see one of these types of organizations pick up this idea.


This would require the ability to limit data storage for each satellite. Maybe I would not want the “charity” satellite to completely fill my node…


I would like to add something too.
If we are talking about premium nodes, let’s give 100% of the bandwidth without any prioritization on the network, where the SNO shouldn’t count on doing anything other than setting up the server and being online 24/7. And for this type of plan, let’s not set up any mirror, because the SNO is already investing a lot of money in setting up the server. This gives the SNO the liberty to tell themselves that at least they are earning a lot of money.
And for the second plan, let’s add some automatic prioritization with a mirror, so that the SNO can also do other things alongside operating the node. There is a higher risk of losing the stored data because the investment is lower than in the premium plan.
Setting up the mirror may lower the return on investment (ROI), but let’s face it, it’s the least we can do.
I am sorry, but I do not have any ideas for the last one.

There are no mirrors.


I am eager for other classes of storagenodes, both as a potential Tardigrade consumer and as a node operator. I have tons of extra disk space and bandwidth for 8 hours a night, but don’t want my bandwidth affected during work and gaming hours. I want to use Storj for cold storage of backup data and am not concerned with hot object storage availability. I would be fine with my data being stored on nodes with availability similar to my own. I totally realize the team may have very good reasons for not wanting to divert effort to support this sort of use case; however, I think it’s a very exciting opportunity and something I’d be willing to throw some hours into to help somebody work towards.

I’m confused about what your offer is:
Are you offering to run a storagenode for Storj for only 8 hours a day?

If that is the case, I hate to be the bearer of bad news, but why would anyone want to do that… So people would only have a window to access their data for 8 hours at night and then no more access after that. What if someone is in a different time zone, are they just out of luck?


> So people would only have a window to access their data for 8 hours at night and then no more access after that.

No. Node owners describe availability contracts. The storage network manages this by replicating data across multiple node owners. This could even be abstracted away from the network by the concept of node pools that offer guaranteed availability by managing and grouping node owners with complementary availability.
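Here is a minimal sketch of that node-pool idea, assuming each operator declares an availability window in whole UTC hours; the window format, type, and function names are invented for illustration, and the pool is only treated as offering guaranteed 24/7 availability if the declared windows jointly cover the whole day:

```go
package main

import "fmt"

// window is a declared availability interval in whole UTC hours,
// e.g. {22, 6} means "online from 22:00 to 06:00".
type window struct{ start, end int }

// hoursCovered expands a window into the UTC hours it spans,
// handling windows that wrap past midnight.
func hoursCovered(w window) []int {
	var hours []int
	for h := w.start; h != w.end; h = (h + 1) % 24 {
		hours = append(hours, h)
	}
	return hours
}

// poolCoversDay reports whether a group of nodes with complementary
// windows jointly covers every hour of the day.
func poolCoversDay(windows []window) bool {
	covered := make(map[int]bool)
	for _, w := range windows {
		for _, h := range hoursCovered(w) {
			covered[h] = true
		}
	}
	return len(covered) == 24
}

func main() {
	pool := []window{
		{22, 6}, // the "8 hours a night" node from this thread
		{6, 15}, // a node in another time zone
		{14, 23},
	}
	fmt.Println("pool offers 24/7 availability:", poolCoversDay(pool))
}
```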

QoS is a thing… Aside from that, it’s really not the bandwidth utilization that makes internet connections slow and unusable, it’s the number of allowed connections per client.
So depending on how it’s set up and how well your network/internet is running, you shouldn’t be able to feel that there is other stuff on the line…

Just like your ISP’s connection isn’t slow even though it’s actually shared with 10,000-100,000 other users.
Unless you have a very limited internet connection, you won’t be able to use even a 1/10th of the bandwidth anyway, and with a correct network setup you would never know you had other stuff running.

not even while gaming…

The true problem is when you’ve got 999 connections open and then open another one… then you get 1/1000th of the bandwidth if all the connections are active, which is nothing in most cases…
However, if you’ve got 4 connections open and you open another one, you will have 1/5th of the full bandwidth; this is why too many connections basically kill a network.
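To make that arithmetic concrete, here is a trivial sketch of the fair-share assumption being used (the 100 Mbps link is just an example): each active connection ends up with roughly the total bandwidth divided by the number of active connections.

```go
package main

import "fmt"

// perConnectionShare returns the rough bandwidth each connection gets
// under simple fair sharing, assuming all connections are active.
func perConnectionShare(totalMbps float64, activeConnections int) float64 {
	return totalMbps / float64(activeConnections)
}

func main() {
	const linkMbps = 100.0
	// 4 existing connections plus 1 new one: each gets 1/5th of the link.
	fmt.Printf("5 connections:    %.1f Mbps each\n", perConnectionShare(linkMbps, 5))
	// 999 existing connections plus 1 new one: each gets 1/1000th of the link.
	fmt.Printf("1000 connections: %.2f Mbps each\n", perConnectionShare(linkMbps, 1000))
}
```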

There is no replication of any kind.