I have to admit that you are breaking the Supplier Terms & Conditions.
They are for a reason. Exactly THIS reason.
Yeah, I told you I migrated them to separate disks, because I learned I was being stupid. I'm sorry for that.
No need to be sorry. Our ToS are “written in blood” — we learned that the hard way. But it seems you were willing to repeat it yourself… Sorry.
I have to admit that I never read the Terms & Conditions.
How am I repeating it? Is running multiple nodes on one machine forbidden too? They each run on their own separate HDDs.
And I never even doubted that this was so.
No, only running them on the same disk/pool.
Maybe write that down here too. That’s why I ran multiple nodes on one disk: I thought the only requirement was one core per node.
So you didn’t read the ToS either and blindly accepted them?
Ok, I’ll add there that you must read the ToS before you start…
I think most here didn’t really read them…
I came on board because it was recommended to me, and then I just read the forum a bit along with the starter requirements and instructions. It ran fine, so I left it at that.
Ok
I can absolutely confirm this. For my day job I’m frequently involved in RFP (Request For Proposal) processes for tool selection for SaaS products or other cloud solutions. This tends to consist of documenting a large list of functional and non-functional requirements and reaching out to many providers to get their response to these questions. If we received a response that said, “Sorry, but without a signed contract we can’t make any promises,” that provider would be scratched off the list without a second thought.
The real no-no is not accommodating prospective customers prior to making the sale. Many sales will never even get a chance to happen if you do that. Customer acquisition requires investment, and this is a small investment to make to enable much faster future growth. It’s just the reality of doing business in this field.
In that case both parties meet halfway. If I were a supplier and a client came requesting 1 billion lemons while I could only make 10,000 a day, I would lay this out in the contract. Maybe add something along the lines of: if the client commits to buying 10,000 a day, I could look into increasing that to 20,000 a day.
This isn’t a fairy tale, btw. It’s how all normal production works: if a client asks a factory to make something, the factory will either charge more for exclusive access to all production facilities, or request that the client commit to a minimum order so that the factory can safely scale up.
If a client comes and requests 1 billion lemons, I buy/produce/borrow that 1 billion lemons, and the client walks away, I’m left with 1 billion lemons to sell.
That would be analogous to a customer requesting exabyte-scale storage. And I’m sure Storj currently would tell those customers they won’t be able to accommodate that.
That’s clearly not the case here. The requests sound like something the network could accommodate in both size and performance. It just requires earlier deletion of test data (which only costs Storj money anyway) and replacing it with TTL based test data that allows them to be more flexible. We’re not talking about ramping the network up to millions of nodes and risk being left hanging.
Additional note: In my experience, ALL providers lie through their teeth when responding to RFPs and then rush to get close to what you require in the meantime. So you can be certain that all of Storj’s competitors will promise prospective clients the world. As a node operator focused on long-term benefit, I can only encourage Storj Labs to play that game to the best of their abilities and focus on long-term customer acquisition, rather than short-term node income from costly test data. Customers will walk away at the drop of a hat if they smell trouble, unless you play this balancing game well. And we node operators don’t have the customer information to judge how well they are doing that. But our long-term incentives are aligned, and I for one trust Storj Labs to make those decisions to the best of their ability.
I agree. In fact I personally think that it would be better if the deletes actually happened faster, so that we know where we stand.
Unfortunately, this is not the case being relayed here. This mysterious client appears to want to upload hundreds of PBs of data. That fits my lemon analogy perfectly. Although SNOs (myself included) would be more than happy to add capacity, we can’t justify adding that capacity unless we see the upward trajectory. And I’m not talking about going up 1 PB; I’m talking about seeing the network at 70% utilization. Adding that capacity on the “faith” that it will be utilized — I’m sorry, but it’s a hard pass, at least from me. Even if the client wanted to upload those hundreds of PBs of data, and even if Storj were willing to reserve that capacity, and even if SNOs added everything they could add, the bandwidth isn’t there.
Why add the drives and watch them idle? Fill one, I’ll add another. I’ll even personally promise to add two if one gets filled.
When SNOs were recently asked about spare/hidden capacity we could bring online… that sure felt like a potential customer large enough for Storj to take seriously, one with their own growth estimates. And Storj had to get back to them, perhaps in an RFP, with a reasonable answer showing they had their own internal metrics and had involved their community… to show they could match or exceed that growth. In the end it probably just ticked a box on page 7 of some PDF.
Fine by me: like you said that’s the game…
What’s the difference between having several PBs filled with test data that is under Storj’s control (and available within a 7-day timeframe) with happy SNOs, and having the same PBs already available but with not-so-happy SNOs? Well… some big SNO could decide to shut down their nodes, and their TBs would disappear from the offer. There is always a risk…
The way I see it, us SNOs should care about Storj’s long term prospect.
Keeping test data in the network and paying out SNOs, while it can be beneficial for both Storj and SNOs in the short term, is not ideal in the long term.
Of course, their initial plan of deleting test data progressively while introducing new customer data would have been ideal, but in my honest opinion the purging of test data is nothing but good for Storj, which in turn is good for SNOs.
Now if the deletion is quickly followed by new ingress, then that’s perfect, but I believe it’s a non-issue either way. Just my personal opinion.
If the unhappy SNOs saw their drives getting filled, I don’t think they’d be that unhappy anymore. If they see their drives getting filled, adding more drives, seeing them filled, then finding out that “yea, it’s not actually data, we need to delete that”, then see their drives lose 10TB per week and gain back 1TB for the next few months, then “oops, turns out the client doesn’t want to sign up”, then they’ll be unhappy. I don’t think there is any unreasonable assumption there.
If the SNOs are happy, see their drives getting filled, then go down to 50%, then gain back 5% per week for the foreseeable future, then they will stay happy. Happy SNOs don’t turn off nodes.
Unfortunately, direct deletions are too slow for the customers (they would need to contact thousands of nodes around the world!), so we decided to accept the delete request, remove the metadata on the satellites, and then send a bloom filter (BF) to the nodes… Sorry, it’s a business… So your node collects that garbage and removes it when it can, not immediately as before…
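For anyone curious how that works: the satellite builds a bloom filter over the piece IDs it still considers live and ships that compact filter to each node; the node then treats any stored piece the filter doesn’t match as garbage. Here is a minimal sketch of the idea (simplified Python, not the actual storagenode implementation; the piece IDs and filter sizes are made up for illustration):

```python
import hashlib

class BloomFilter:
    """Tiny bloom filter: no false negatives, rare false positives."""

    def __init__(self, size_bits=1024, num_hashes=3):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item):
        # Derive several bit positions from independent hashes of the item.
        for i in range(self.num_hashes):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, item):
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))

# "Satellite" side: build a filter of pieces that should be KEPT.
live_pieces = {"piece-1", "piece-2"}
keep = BloomFilter()
for piece in live_pieces:
    keep.add(piece)

# "Node" side: anything the filter does not match is safe to delete.
stored_pieces = {"piece-1", "piece-2", "piece-99"}
garbage = {p for p in stored_pieces if not keep.might_contain(p)}
```

The important property is that a bloom filter can never produce a false negative: a live piece always matches, so the node never deletes real data. The trade-off is occasional false positives, meaning a small amount of garbage may survive one round and be caught by a later filter.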
That’s not what’s happening. It’s not faith: we upload data with a TTL, so it will be removed in a timely manner, without slow garbage collection.
This perfectly fits what our prospective customers want.
Nobody is asking you to add capacity right now, but now you are informed. “Forewarned is forearmed.”