The problem is, next time Storj asks for capacity expansion because of a big business opportunity, they will receive unsatisfying answers…
When you have old nodes, you'll understand many things…
Okay, I'm just going to use this space to emotionally vent that I feel bummed out after the test process. It's due to a mistake that I made, but there might also be some recommendations for Storj for future test rounds.
My mistake was buying new hard drives, which are now sitting mostly empty. Before, I had four hard drives that were mostly full, and life was good. Now I have seven larger hard drives sitting mostly empty, seemingly uselessly. Not to mention the time spent troubleshooting and optimizing things as they broke over the last few months.
(So obviously, I should have reminded myself that this was only test data that was going to be deleted, and not bought anything until there was confirmation that real data was being onboarded. That's on me.)
From Storj's standpoint, I think the network may have been ballooned with too much test data, retained for too long. What would have been helpful is more explicit communication of:
- we are using test data to increase aggregate used storage by x%
- the inflows are running between dates x and y
- the data is being deleted between dates x and y
Some of this could have been gleaned by querying the piece expiration database, but I don't know how to do that. Without that, a simple operator such as myself would just see disks fill up, respond to the economic signal, add capacity, and then, two months later, have the overwhelming majority of that new storage deleted.
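For anyone else wondering, here's roughly what such a query could look like. This is just a sketch under assumptions, not official guidance: it assumes your node version still keeps expirations in the piece_expiration.db SQLite file, with a piece_expirations table and a piece_expiration timestamp column (newer versions may store this differently), and it only gives piece counts per day, not bytes, since that database doesn't record piece sizes. Query a copy of the file, not the live one.

```python
# Sketch: count pieces scheduled to expire per day on a storagenode.
# Assumes the piece_expiration.db SQLite layout described above;
# point it at a COPY of the file so the running node isn't disturbed.
import sqlite3

con = sqlite3.connect("piece_expiration.db")  # path to your copy
rows = con.execute(
    "SELECT date(piece_expiration) AS day, count(*) AS pieces "
    "FROM piece_expirations "
    "GROUP BY day ORDER BY day"
)
for day, pieces in rows:
    print(day, pieces)
con.close()
```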
Some of this test information was relayed in the forum, but that would require that a SNO actually be reading the forum, and it was also relayed in a more loosey-goosey, non-specific manner.
The total size of the network probably grew a whole lot in the last few months, and it will probably shrink some in the next few as some storage is taken offline.
I believe they still plan to maintain reserved capacity based on quarterly growth estimates, so they've told us how much they plan to use. And if they only paused SLC uploads temporarily to allow TTL data to expire for the new customer to use instead… then I'd expect public test data uploads to resume soon (but at a lower rate, as they don't need to reserve as much now).
(At least I haven't heard of any changes to test-data/reserved-capacity plans?)
To me it looks like:
It goes in the direction we were frightened about when they introduced Storj Select:
Select will replace Public in the long term.
And now they have deleted terabytes of test data, and there will be no big new customer. So we have lost a lot of used space (and income) again.
---
This shouldn't sound too pessimistic, but it's been the same story over the last months: assumptions and news, but as a result, less income for the public SNO.
It is what it is, but it's more disappointing when there is a big assumption beforehand.
Overall, there is at least a big new customer on the Select network. Congrats!
And the public network is much faster and better than before the testing, even if it's sad news for the public SNO.
You can’t lose something you never had in the first place.
I guess that was the plan back when they expected the 10 PB customer to onboard onto the public network soon, to make sure there would be enough capacity available.
There is no more reason to do so, as the customer is onboarding onto the Select network.
Maybe the big customer can share 1 PB of that with the public network too?
Like a cache or something?
I had made a suggestion like that:
It would be dead easy for the Storj sales team to offer such a customer free storage on the public network, like 1 PB. If they cannot use it due to legal reasons, it won't cost them anything. And if they start to use it, it is an investment in the public network.
Also, as suggested, it should be dead easy to move data from the Select network to the public network. It should not be required to re-upload the data.
This way, customers could try out the Select network, gain confidence, and start moving less critical data to the public network, gaining confidence in that too.
I think these quotes should be kept in mind:
So this comes down to code reviews, code audits, coding practices, code management, company procedures, management processes, security practices etc.
According to this statement, storing data on HDDs operated by individuals might not be the key problem for a customer requiring SOC2 certification.
Cross-pollination, a stream-of-consciousness random musing:
What if the S3 gateways pushed extra (over-and-above) erasure shards from the Select to the Public network, such that no data could be reconstructed in the public domain alone, offering clients additional redundancy for a price? And/or vice versa, Public to Select. Mechanisms are in place to restore 7-day trash on both nets, so this could represent a fairly trivial coding endeavor. An additional hardening option could be offered to customers of either network. This could let the public network participate more fully in such deals and be entirely symbiotic. If, say, only 1-28 shards could ever be distributed to the public network, nothing could ever be reconstructed solely on the opposite network; therefore, it wouldn't break any SOC2 audit standards, but only supplement resiliency.
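To make the threshold property concrete, here's a trivial back-of-the-envelope sketch. The k = 29 below is an assumption taken from the often-cited Storj RS minimum, not a confirmed parameter; the point is only that capping the public side at k - 1 shards makes reconstruction there impossible.

```python
# Illustrative only: if rebuilding a segment requires k shards, then
# holding at most k - 1 shards on the public network guarantees the
# public side alone can never reconstruct the data.
K_RECONSTRUCT = 29  # assumed minimum shards needed to rebuild a segment

def public_alone_can_rebuild(public_shards: int, k: int = K_RECONSTRUCT) -> bool:
    """True only if the public network holds enough shards to rebuild."""
    return public_shards >= k

for n in (1, 28, 29):
    print(f"{n:2d} public shards -> rebuildable: {public_alone_can_rebuild(n)}")
# 1 and 28 print False; 29 prints True
```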
If I'm in risk management at any F500 company, there is no way big corporate data will ever accept the geopolitical, worldwide data risk of Storj's public network as it stands. But if shard redundancy (additional or not) were offered for the Public net and placed on the private SOC2 network, that would significantly change the risk profile; i.e., any threatened public-operator intervention would be entirely mitigated. Additionally, it would allow Storj to capitalize on the significant PB expansion capability of the Public net and continue to accelerate its growth trajectory.
While there seem to be multiple 10+ PB customers in the pipeline, I'm pretty sure it's not currently possible, competitively, for the limited capacity of Select to onboard much more at this time (thus inhibiting these new projects until Select operators are recruited further and provide more storage as fast as needed/desired, or Storj antes up the necessary 'surge' nodes themselves). However, Storj could blitz on this concept, as they just did with this last round of testing and improvements, and make it a reality. If further security concerns come to light, they could likely be mitigated by convergent encryption or the like. Finally, if Storj were to become a SOC2-compliant entity, providing such procedures to facilitate any necessary security requirements would fall within the scope of its audit.
Things that make you go ‘hmmmmm,’ in a high intonation
2 and 3/4 cents
Julio
Yes, maybe some thorough thinking needs to be done on how to make the Select customers hungry for the Public network.
One idea I have added for a vote here: Store 1 PB on the Select network, get 100 TB free on the Public network
I think one of the most important things would be that customers can easily move their data back and forth between the 2 networks. This way they can always test and revert if they dislike the outcome.
I would say that it was not just waste, because it's not the only big customer.
See
Just a milestone. Anyway, we likely wouldn't provide updates about other customers in the same or a smaller range. The next milestone is likely required.
You got the point.
At least that work helps marketing stay at the top of the performance charts! Smoking Amazon, but cheaper… is a great way to attract more paying customers.
The way I understand it, this is not about actual security, but about following the rules needed to get a certificate, and those rules may be arbitrary and not make much sense.
For example, I know a company that wanted to see if they could get an ISO certificate. One of the rules (among many) was that if you have cables in channels above the racks, those cables have to be covered for some reason. I don't know how that would improve security or reliability; I guess visible cables just look bad?
Similar here. Despite the industry-standard encryption used (and the customer always being able to additionally encrypt the data), the requirement is that the data has to be stored on servers in compliant datacenters. Does that make it more secure? Maybe, maybe not. Does that make it more reliable? Maybe, maybe not. But those are the rules. It's not about whether your server room has a small ventilation shaft through which somebody could send a one-year-old child (an older one won't fit) or a cat to steal drives from your server.