Do you think bandwidth limits will impact file availability? When the bandwidth limit is reached for a node, all pieces that node is storing are effectively unreachable for the rest of the month (no more downloads to stay under limit). If the majority of nodes set a bandwidth limit that is low enough to be reached, it might pose a problem.
Another scenario (not likely to happen, but a nice thought experiment): more than 20 TB of egress is used on average across the entire network in one month. Only the nodes where the limit is increased above the default 20 TB would be able to send or receive pieces. If a large number of node operators never set it higher than the default, and also don’t actively monitor their nodes and raise the limit later, the network would be partly inaccessible until the next month.
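As a rough check on the first scenario, here is a minimal sketch of how segment availability degrades as more nodes hit their bandwidth cap. It assumes each segment needs 29 of 80 erasure-coded pieces (assumed defaults, not confirmed figures) and that nodes hit their caps independently:

```python
import math

def segment_availability(p_capped, k=29, n=80):
    """Probability that at least k of a segment's n pieces sit on nodes
    that have NOT hit their bandwidth cap, assuming each node hits its
    cap independently with probability p_capped. k=29 and n=80 are
    assumed erasure-coding defaults."""
    p = 1.0 - p_capped  # probability a given node is still reachable
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k, n + 1))
```

Under those assumptions, even 30% of nodes going dark barely dents availability, but past roughly 60% capped nodes a segment quickly becomes unretrievable, which matches the intuition that this only bites if a majority of nodes set low limits.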
I sent this one in shortly after the previous town hall meeting, but it may have since gotten lost under a pile of new emails.
I’ve been dealing with the implications of GDPR at work over the past few years and was wondering how you deal with the requirement of specific data processing agreements with any third party that stores or processes data for your customers?
Since nodes are essentially third party data processors, doesn’t GDPR require you to have data processing agreements with all node operators? This could be a requirement for companies to be able to use the storj network for storage.
Recently the CCPA has added to this, with less stringent but similar requirements for dealing with “service providers” or “third parties”, whichever node operators would be classified as. I realize that neither of these regulations was written with decentralized storage on untrusted nodes in mind, but you’ll still have to deal with them.
This is an interesting question. I wouldn’t say nodes are data processors. The files are encrypted client side so you are guaranteed not to have any personal data (unless someone develops a rogue client that can upload unencrypted files? Is that possible? What if someone uploaded files that are illegal to own?)
On the other hand, since the uplink communicates with storage nodes directly, you do see which IP addresses are connecting to you. If you decide to log that and run some smart data analysis thing on it you might be able to get some useful data out of it. In fact, by default, IP addresses are stored in the log file when an error happens. Does this count as storing personally identifiable data?
Also, how would a lawyer with possibly not the greatest knowledge of technology look at this?
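If logged client IPs do turn out to count as personal data, one possible mitigation (purely illustrative, not an existing Storj feature) would be scrubbing them before log lines are persisted:

```python
import re

# Hypothetical helper: mask IPv4 addresses in log lines before they
# are written to disk, so error logs never retain client addresses.
IPV4 = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def scrub(line: str) -> str:
    return IPV4.sub("[redacted-ip]", line)
```

That trades away some debugging value (you can no longer correlate errors by client), which is exactly the kind of judgment call these regulations force.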
It’s open source software. It’s definitely possible to skip the encryption, and even to tune the RS settings and craft pieces in such a way that intact files end up on nodes. That said, it takes deliberate effort from the customer to do that. This may be more of a security concern than a legal one.
The problem is that these regulations don’t have exceptions for encrypted data.
I know Storj Labs has lawyers looking into this stuff, and I sent in this question originally back in November. Perhaps they have thought it over since then.
What are the plans, if any, for a total Storj network overview service/web page, showing things such as total network capacity and utilization, plus anonymized SNO information such as average/median utilization and capacity?
What performance metrics can customers expect to see in terms of requests/minute, sustained throughput, etc. upon production release?
Will storage capacity still be gated after production release, like it currently is, in order to balance supply and demand? Larger SNOs still have a lot of available capacity, and it would be good to know what the limiting factors are for that capacity being used: whether it is an artificial limitation to moderate data ingestion, a way to keep a certain threshold of nodes with sufficient capacity open, or something else.
As asked by @Cmdrd, I’m wondering what maximum performance the network could reach. Let’s imagine for a minute that a colossal client (like Steam/Valve, for instance) were to decide to put their data on the Tardigrade network: Steam currently provides incredible bandwidth, easily maxing out my 300 Mbps (37.5 MB/s) download connection.
On the thread “Got my Tardigrade invite. Decided to run a couple of speed tests”, @Pentium100 is even talking about a sustained speed of 480 Mbps (60 MB/s).
Could the STORJ network compete with such speeds one day? Or even higher? Is this foreseen?
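A back-of-envelope way to frame this, assuming the client fetches pieces in parallel and needs roughly 29 of them to reconstruct a segment (an assumed figure, not a confirmed one):

```python
def per_node_mbps_needed(client_mbps: float, k: int = 29) -> float:
    """Per-node serving rate needed so that k parallel piece downloads
    together saturate the client's link. k=29 is an assumed
    'pieces needed to reconstruct' figure; real clients also fetch a
    few extra pieces and drop the slowest nodes."""
    return client_mbps / k
```

By that estimate, saturating a 300 Mbps link only requires each of 29 nodes to serve about 10 Mbps, so high aggregate speeds would come from parallelism across many ordinary nodes rather than from any single fast node.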
Will an effective way to notify SNOs of any change in the terms of service be implemented, and if so, when? Information on the dashboard, or an email to SNOs?
For example, I was surprised that the minimum requirement used to be 500 GB, and now I see “A minimum of 250 GB of available Space per Storage Node”.
Someone changed it at some point, but the information about the change never reached me as an SNO, and I cannot reread the ToS every day.
I would like you guys to consider a change in policy.
I recommend that you change your policy from one storage node per IP to two.
We SNOs build multiple storage nodes for the purpose of redundancy, just like any fault-tolerant storage solution. We want to keep supporting the effort, but sometimes things happen and one node goes down. By raising the limit to two, we wouldn’t panic as much, because we’d know we have a backup running, giving us time to get the failed storage node back up and running.
I know you guys are really focusing on decentralization and spreading out the data globally…but from the storage node operator’s perspective, two is better than one.
Not trying to increase my earnings…just trying to keep things up and running.
It wouldn’t split the same amount of data between them: both nodes combined can only get one piece of each file, not one piece per node.
So it works out the same as having one node. Once there are more clients it will matter less, as there will be far more pieces than now.
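The point above can be sketched as a toy model of the assumed selection rule (a hypothetical helper, heavily simplified from the real node-selection logic): pieces of one segment go to distinct subnets, so two nodes behind the same IP never both hold a piece of the same file.

```python
import random

def place_pieces(nodes_by_subnet, n_pieces=80):
    """Pick n_pieces distinct subnets, then one node within each.
    Two nodes behind the same IP/subnet therefore never hold pieces
    of the same segment. n_pieces=80 is an assumed default."""
    chosen = random.sample(list(nodes_by_subnet),
                           min(n_pieces, len(nodes_by_subnet)))
    return {s: random.choice(nodes_by_subnet[s]) for s in chosen}
```

Under this model a second node behind the same IP adds no extra data per file; it only spreads the same one piece across two disks, which is exactly why it still helps with the redundancy argument above.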