Is there a way to set limits and quotas?
I see projects have limits. Is there a way to set limits on space and bandwidth at a lower level, like per bucket or per encryption key? Also, are stats available on consumed resources at that chosen level?
If this is not available out of the box, how would such limits be programmatically implemented, modified, monitored and enforced? Currently it seems it is only possible via projects?
The custom limits are supported only on the project level.
You can get a detailed usage report down to the bucket level (Billing → Detailed Usage Report). You can also see this info in the list of your projects on the Billing page; it’s grouped by project but can be expanded to show usage per bucket as well (Detailed Usage Report).
So, the lowest level is a bucket.
So the only way to have different limits set and enforced by the system is to create different projects?
So when I have 1000 users and I want to assign 5GB/5GB to each, I create 1000 projects?
That way reports are available on a bucket level but limits are not. So I could not effectively set a limit of 5GB/5GB on bucket A and 10GB/10GB on bucket B, but I could set 5GB/5GB on project A and 10GB/10GB on project B.
Yes, the custom limits are supported only on the project level.
If you allow users to store their data in your buckets, you would also implement billing and, of course, limits, if you need them, as a functionality of your service.
We haven’t had a feature request to implement custom limits on the bucket level as well. If you have a use case, you may contact sales and solution engineers to discuss how it could be implemented.
The use case is if you have several customers or users who will upload and download data. If you set limits on the project level only, each user could consume the full limits, which might not be what you want. You might want to allocate a fixed allowance to each user that they alone can consume. This would be easy if you could create a bucket, set limits on the bucket and let the user use it.
Another way would be what has already been mentioned in the question linked here: Storage Limits, Sub-users, etc. With sub-users you would be able to create a user or user group and assign the limits to it.
Maybe even limits assigned to specific access grants would be doable?
This would give much more fine-grained control over allowed resource consumption, because I think it is not really feasible to create 1000 or more projects…
You would usually implement multitenancy on top of object storage yourself, so that you can flexibly control and account for, let's say, sharing data between users. Why do you need it at the object storage level?
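For context, a common way to build that multitenancy layer is to derive a restricted access grant per tenant from the operator's grant, so each user can only reach their own prefix. A minimal sketch with the Go uplink library (the grant value, bucket name and prefix layout are placeholders, not anything Storj prescribes):

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"storj.io/uplink"
)

func main() {
	ctx := context.Background()
	_ = ctx // not needed here, but typically passed to uplink calls

	// Parent access grant held by the service operator (placeholder value).
	parent, err := uplink.ParseAccess("1abc...parent-access-grant")
	if err != nil {
		log.Fatal(err)
	}

	// Derive a grant for tenant "u1" that is limited to that tenant's own
	// prefix inside a shared bucket and expires after 30 days.
	tenantGrant, err := parent.Share(
		uplink.Permission{
			AllowDownload: true,
			AllowUpload:   true,
			AllowList:     true,
			AllowDelete:   true,
			NotAfter:      time.Now().Add(30 * 24 * time.Hour),
		},
		uplink.SharePrefix{Bucket: "shared-bucket", Prefix: "users/u1/"},
	)
	if err != nil {
		log.Fatal(err)
	}

	serialized, err := tenantGrant.Serialize()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("access grant for tenant u1:", serialized)
}
```

A grant like this can also be registered with the edge auth service to hand out per-tenant S3 credentials, but that only covers isolation, not quotas or usage accounting.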
How would one implement it, if usage data is only available at the bucket level?
If you use that approach, how would you monitor the usage in terms of storage space and bandwidth consumed, temporarily suspend up- or downloads, and lift the suspension?
Also, as specific features like geo-fencing or versioning are tied to the bucket level, users need access to multiple buckets if they should be able to use those features.
So monitoring usage only on the bucket level does not seem sufficient.
Eh, in AWS you’d be able to tag your objects to track billing per tag, or collect data from CloudTrail, or use S3 Server Access logs. Here indeed you’re limited to generating time-limited links and tracking how many you have created as an approximation, or leveraging a middleman like a separate CDN to count downloads for you. I don’t see any option to do per-tag accounting?
That’s a simple one. You set up one multitenant service per region. And if versioning is an opt-in feature for your users, I assume the core value of your product is storage itself and you want to reuse as many features as possible directly from Storj.
I guess I am too used to building products with core value that answers non-IT needs.
Storj supports metadata too, so one may use tags as well. However, we do not have a feature to track billing by these tags, because they are encrypted too, so for the satellite they are just random data.
We have a different level of tags, which are not encrypted, like the value attribution, so this probably could be used to account usage even per object; I'm just not sure that it's exposed directly.
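To illustrate the first point: custom metadata such as a tenant tag can be attached at upload time with the Go uplink library, but because it is end-to-end encrypted the satellite cannot aggregate billing by it, so any per-tag accounting would have to happen client-side. A rough sketch (bucket, key and tag names are made up):

```go
package main

import (
	"context"
	"log"

	"storj.io/uplink"
)

// uploadWithTenantTag uploads data and attaches a per-tenant custom metadata
// entry. The metadata is encrypted end-to-end, so the satellite cannot use it
// for billing; only clients holding a suitable access grant can read it back.
func uploadWithTenantTag(ctx context.Context, project *uplink.Project, bucket, key, tenant string, data []byte) error {
	upload, err := project.UploadObject(ctx, bucket, key, nil)
	if err != nil {
		return err
	}
	if err := upload.SetCustomMetadata(ctx, uplink.CustomMetadata{"tenant": tenant}); err != nil {
		_ = upload.Abort()
		return err
	}
	if _, err := upload.Write(data); err != nil {
		_ = upload.Abort()
		return err
	}
	return upload.Commit()
}

func main() {
	ctx := context.Background()

	access, err := uplink.ParseAccess("1abc...access-grant") // placeholder
	if err != nil {
		log.Fatal(err)
	}
	project, err := uplink.OpenProject(ctx, access)
	if err != nil {
		log.Fatal(err)
	}
	defer project.Close()

	err = uploadWithTenantTag(ctx, project, "shared-bucket", "users/u1/report.pdf", "u1", []byte("example data"))
	if err != nil {
		log.Fatal(err)
	}
}
```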
Can you clarify what would be your approach or solution:
A is the Storj account holder and admin. A must not have access to users' data; it is all zero-knowledge.
A provides buckets B1, B2, B3 with different feature sets (e.g. geo-fencing).
Every user gets their own S3-compatible access so they can upload and download to and from B1, B2, B3 as they like, but cannot see the other users' data.
The admin sets limits for each user, like U1 5GB/5GB, U2 100GB/100GB, U3 1TB/1TB. Limit means the user can exclusively and fully use these limits for space and bandwidth.
Once the corresponding limit is reached, uploads and/or downloads of the respective user (and only that user) must be suspended.
Suspension may be lifted manually/automatically upon event E (which could be anything imaginable).
A must be able to monitor resource usage per user, i.e. what space and bandwidth U1, U2, U3, etc. are using, e.g. for billing.
And there is one more reason why I thought project-per-user would be the most promising approach:
Projects may be migrated from one Storj account to another, making it possible to hand over responsibility for data management and billing from the MSP, VAR or Storj customer directly to the tenant, for offboarding or other purposes.
I don't know if I understand it correctly, but it sounds like projects could be moved between Storj accounts? So if at some point U1 had his own Storj account, he would be able to migrate "his" project to his own account more easily than if it were done on the bucket or object key path level?
To enforce any limits you have two approaches: either periodically calculate usage for each bucket/prefix that you treat as a separate customer, or place a proxy in front and calculate it in real time. The latter is especially useful if you want to hand out S3 credentials, because traffic can then be measured directly rather than derived from the nodes' reporting, which you would need to correct anyway, since some traffic may not be confirmed by the nodes.
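To sketch the second (proxy) approach: a minimal reverse proxy could sit in front of the S3 gateway, count response bytes per tenant and start rejecting requests once a tenant's bandwidth allowance is used up. Everything here is an assumption for illustration: the tenant lookup via a custom header, the in-memory quota store, and the upstream URL; a real implementation would also meter uploads, persist counters and deal with S3 request signing across the proxy:

```go
package main

import (
	"io"
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"sync"
)

// quotaStore tracks consumed egress bytes per tenant in memory.
// A real deployment would persist this and meter ingress as well.
type quotaStore struct {
	mu    sync.Mutex
	used  map[string]int64
	limit map[string]int64
}

func (q *quotaStore) allowed(tenant string) bool {
	q.mu.Lock()
	defer q.mu.Unlock()
	return q.used[tenant] < q.limit[tenant]
}

func (q *quotaStore) add(tenant string, n int64) {
	q.mu.Lock()
	defer q.mu.Unlock()
	q.used[tenant] += n
}

// countingBody wraps a response body and reports bytes read to the store.
type countingBody struct {
	io.ReadCloser
	store  *quotaStore
	tenant string
}

func (c *countingBody) Read(p []byte) (int, error) {
	n, err := c.ReadCloser.Read(p)
	c.store.add(c.tenant, int64(n))
	return n, err
}

func main() {
	// Hypothetical upstream: the hosted or a self-hosted S3 gateway.
	upstream, _ := url.Parse("https://gateway.storjshare.io")
	proxy := httputil.NewSingleHostReverseProxy(upstream)

	store := &quotaStore{
		used:  map[string]int64{},
		limit: map[string]int64{"u1": 5 << 30, "u2": 100 << 30}, // 5 GiB, 100 GiB
	}

	// Count downloaded bytes as the response streams back to the client.
	proxy.ModifyResponse = func(resp *http.Response) error {
		tenant := resp.Request.Header.Get("X-Tenant-ID") // placeholder tenant lookup
		resp.Body = &countingBody{ReadCloser: resp.Body, store: store, tenant: tenant}
		return nil
	}

	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// Placeholder: in practice the tenant would be derived from the
		// S3 access key in the request, not from a custom header.
		tenant := r.Header.Get("X-Tenant-ID")
		if !store.allowed(tenant) {
			http.Error(w, "bandwidth quota exceeded", http.StatusTooManyRequests)
			return
		}
		proxy.ServeHTTP(w, r)
	})

	log.Fatal(http.ListenAndServe(":8080", handler))
}
```

Suspending a tenant then simply means refusing their requests until the counter is reset or the limit is raised, which would also cover the "lift suspension upon event E" requirement above.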
It's possible, but not in all cases; some would require a re-upload, especially if you need to re-encrypt the data or use Storj-managed encryption instead of manual. So it's handled on a case-by-case basis at the moment.
But if users use the S3 gateway, how can there be a proxy to calculate usage?
And with S3 credentials and Storj S3 gateway usage I don’t see a way to get around this:
One challenge that can surface is that the activity for your S3 keys is unlikely to be evenly distributed across your tenants. In this model, you will need to separately monitor and meter usage per tenant. Storj does not aggregate usage information at the Object Key Path Prefix level.
I see that it might be possible to get the space per user on an object prefix level with ls or something, but what about the consumed bandwidth?
Of course, that covers only storage usage. But you may also request rollup information programmatically via the API; you would need to contact the sales and solution engineering teams to design your solution in detail. Make your request as specific as possible; I wouldn't be able to provide you with a complete solution here.
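For the storage side mentioned above, a periodic per-prefix summation (the programmatic version of "ls and add it up") is straightforward with the Go uplink library; a rough sketch, assuming tenants are separated by prefix inside one bucket:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"storj.io/uplink"
)

// prefixUsage sums the sizes of all objects under a prefix, giving per-tenant
// storage usage. It does not cover bandwidth; egress would still have to be
// metered separately (e.g. via a proxy or the usage APIs discussed above).
func prefixUsage(ctx context.Context, project *uplink.Project, bucket, prefix string) (int64, error) {
	var total int64
	objects := project.ListObjects(ctx, bucket, &uplink.ListObjectsOptions{
		Prefix:    prefix,
		Recursive: true,
		System:    true, // include system metadata such as ContentLength
	})
	for objects.Next() {
		total += objects.Item().System.ContentLength
	}
	return total, objects.Err()
}

func main() {
	ctx := context.Background()

	access, err := uplink.ParseAccess("1abc...access-grant") // placeholder
	if err != nil {
		log.Fatal(err)
	}
	project, err := uplink.OpenProject(ctx, access)
	if err != nil {
		log.Fatal(err)
	}
	defer project.Close()

	used, err := prefixUsage(ctx, project, "shared-bucket", "users/u1/")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("tenant u1 stores %d bytes\n", used)
}
```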