"At the beginning of the year, we took a look at the Web3 landscape and were concerned that most minters of NFTs had no clue that their NFTs could disappear.
The most common misconception I’ve heard is that minters think NFTs are stored on the blockchain. In reality, however, only links to NFTs are stored on the blockchain—the actual NFT is just sitting on the interwebs. They’re either using traditional Web 2.0 ways to host content or they are adopting a more decentralized model with Web3.
As NFT creation races on, IPFS pinning providers are taking shortcuts with storage by using centralized providers like Amazon’s S3. We think we can do better with our pinning service. "
Hey, what is the difference between Storj's pinning service and Arweave or Filecoin permanent storage? Filecoin permanent storage isn't live yet, but I suppose it's going to run on IPFS with Arweave-like economic incentives for permanent-storage providers.
Why would any NFT project choose the Storj pinning service over Arweave (already established for that purpose)?
That is really a good move! Congrats to the team.
Storj is moving forward in the Web3 space, and that's a good thing for every actor in the ecosystem:
for end users, because they can easily benefit from Storj's technology and capacity. It's definitely a good move for user experience, and therefore for user adoption.
for Storage Node Operators, because it will bring more demand, meaning more data and more revenue.
How is the smart contract integrated with Storj network? What’s the role of this smart contract exactly? Is it just about being able to pay for storage directly on-chain?
I don’t get the real meaning of “Permanently pin with Storj”. With the Storj Pinning service, your content is stored (on the Storj network) and available as long as someone pays for it, right?
With the Storj Pinning service, after storing your content on the Storj network, you’ll get a link like “https://link.us1.storjshare.io/raw/…”. How is this supposed to be permanent? Indeed, if the domain “storjshare.io” were no longer owned by Storj, my NFT URL would be dead, and I couldn’t change it since it’s written as-is on-chain… [EDIT] This is not true. What I described is the result of this tutorial, but it appears to be an older way to store NFTs on Storj that doesn’t use IPFS at all. Storing NFT content on the Storj Pinning Service actually looks like this.
As a creator who pays to store their NFT content on Storj, how can I be sure that the content will indeed stay available for the duration I asked for? After all, the Storj pinning service is still “centralized” and managed by the Storj company (which may close one day).
I found some interesting information about the architecture of this Storj Pinning service: EasyPin: IPFS Pinning Service With Smart Contract | Devpost.
I don’t know how official this article is, but it seems legit since it was written by Storj employees. I don’t know if the project presented in the article is the official one and the same as the one in the announcement. So far it is just a proof of concept, and even if there are some drawbacks, it is very promising.
@BrightSilence I think it gives us answers about how the smart contract is integrated in the service. In a nutshell, the smart contract’s role is to:
receive STORJ payments for a specific CID to be pinned
keep records of how much STORJ has been paid for a specific CID.
The Storj Pinning service itself (“Storj easypin service” in the diagram) is actually hosted off-chain and listens to on-chain events to know which CIDs have to be pinned (meaning: someone transferred some STORJ to the PIN smart contract for that CID).
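If I understand the Devpost write-up correctly, the flow can be sketched as a toy model in Python (class names like `PinContract` and `EasyPinService` are my own stand-ins, not the actual EasyPin code):

```python
from collections import defaultdict

class PinContract:
    """Toy model of the on-chain side: a ledger of STORJ paid per CID."""
    def __init__(self):
        self.paid = defaultdict(int)   # CID -> total STORJ received
        self.events = []               # emitted payment events

    def pay_for_pin(self, cid: str, amount: int):
        """Accept a STORJ payment for a CID and emit an event."""
        self.paid[cid] += amount
        self.events.append(("PinPaid", cid, amount))

class EasyPinService:
    """Toy model of the off-chain service: watches events and pins CIDs."""
    def __init__(self, contract: PinContract):
        self.contract = contract
        self.pinned = set()

    def poll(self):
        """Pin every CID that has a payment event we haven't handled yet."""
        for name, cid, amount in self.contract.events:
            if name == "PinPaid" and cid not in self.pinned:
                self.pinned.add(cid)  # in reality: pin the CID to Storj-backed IPFS

contract = PinContract()
contract.pay_for_pin("QmExampleCid", 100)
service = EasyPinService(contract)
service.poll()
print("QmExampleCid" in service.pinned)  # True
```

The key property is that the payment record lives on-chain, while the actual pinning is an off-chain reaction to those events.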
So in my understanding, I can draw the following conclusion: the Storj Pinning service is off-chain and therefore a Single Point Of Failure. If Storj Labs shuts down, permanently or temporarily (bankruptcy or service shutdown), you’re not guaranteed that your CID will still be pinned and available, even if you paid upfront for the content to be stored for 50 years. Still, even if this service is not fully decentralized, it provides some advantages over existing solutions:
payment and IPFS storage automation: since the payment and pinning requests are done on-chain, any NFT project could implement this logic in the smart contract itself. NFT developers could even implement storage-fee logic in their smart contracts: each time an NFT is sold, a percentage of the transaction could be sent to the PIN contract to extend the storage time
the pinning service itself (“Storj easypin service”) is centralized and managed by a private actor, but the storage itself is very redundant and secure compared to the storage used by other pinning services that rely on AWS or even local, non-redundant storage infrastructure.
[EDIT] This prototype stores all the IPFS CIDs on-chain; only the storage/pinning is off-chain. Therefore anybody can check any of the pinned CIDs and monitor whether past pinnings are actually honored. (thanks to @elek)
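The storage-fee idea mentioned above could look roughly like this sketch (the 2.5% fee and the `pay_for_pin` method are hypothetical, just to illustrate routing a cut of each sale to the PIN contract):

```python
class PinContract:
    """Minimal stand-in for the PIN smart contract's payment ledger."""
    def __init__(self):
        self.paid = {}
    def pay_for_pin(self, cid, amount):
        self.paid[cid] = self.paid.get(cid, 0) + amount

PIN_FEE_BPS = 250  # assumed 2.5% storage fee per sale, in basis points

def handle_nft_sale(sale_price, cid, contract):
    """On each NFT sale, send a fixed cut to the PIN contract so the
    storage window for this CID keeps getting extended."""
    fee = sale_price * PIN_FEE_BPS // 10_000
    contract.pay_for_pin(cid, fee)
    return sale_price - fee  # remainder goes to the seller

contract = PinContract()
seller_gets = handle_nft_sale(1_000, "QmExampleCid", contract)
print(seller_gets, contract.paid["QmExampleCid"])  # 975 25
```

In a real NFT contract this would live in the sale/transfer function, so storage funding scales with secondary-market activity.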
The project was created by Storj team members, but it is not productized at this moment. We are still soliciting feedback. You make really good points in this post. Thanks for sharing them!
Does it have to be an NFT? I imagine there should be a market of «pay once, store for X years» data. For example, many jurisdictions require organizations to store their tax or legal data for many years just in case it would be needed for an audit. Allowing them to budget it once as CAPEX, as opposed to monthly as OPEX (and risking accidental deletion by forgetting to pay a single bill) might be useful.
It’s an interesting idea. Although data storage, in theory, should get cheaper over time. So, paying a 50 year storage fee today wouldn’t necessarily be as good a value as paying for storage over time. Not to mention, technologies change. Imagine what someone might pay for 10 megs of storage 50 years ago versus today.
You can already prepay for service if you choose STORJ as your payment method. Just calculate how much storage space you need for your data and how often you expect to have to access/download your data again, then deposit sufficient STORJ to cover the number of months/years you would like to preserve the data for. Added bonus: 10% will be added to your balance if you pay with STORJ, and you can always extend the time by topping off your STORJ balance.
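As a rough back-of-the-envelope sketch of that calculation (the $4/TB-month and $7/TB figures are assumptions based on list prices at the time, and the STORJ price used here is hypothetical):

```python
# Assumed list prices; check current Storj DCS pricing before relying on these.
STORAGE_USD_PER_TB_MONTH = 4.0
EGRESS_USD_PER_TB = 7.0
STORJ_BONUS = 0.10  # 10% added to the balance when paying with STORJ

def prepay_needed(tb_stored, months, tb_egress_per_month, storj_usd_price):
    """Estimate how many STORJ tokens to deposit for a fixed retention window."""
    usd = months * (tb_stored * STORAGE_USD_PER_TB_MONTH
                    + tb_egress_per_month * EGRESS_USD_PER_TB)
    usd_to_deposit = usd / (1 + STORJ_BONUS)  # the 10% bonus tops up the rest
    return usd_to_deposit / storj_usd_price

# e.g. 1 TB stored for 5 years, 0.1 TB egress/month, STORJ at $1.00 (hypothetical)
tokens = prepay_needed(1.0, 60, 0.1, 1.00)
```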
Regarding expected future costs, I don’t think that will be a problem either, as billing will still happen month by month. You are not actually prepaying for a fixed amount of storage and egress; you just have a balance (in USD) in your account that is reduced by the amount you are billed each month. If Storj were to lower per-TB costs in the future, your balance would simply be drawn down more slowly than at the old prices.
CAPEX vs. OPEX is a tax decision for some companies. Besides, sometimes they have separate budgets and prefer to use one instead of the other for political reasons.
Also, yeah, the feature would need to be priced to account for the risks and value of long-term data storage. It can’t be exactly time×amount. I recall there used to be a service that would estimate the depreciation rate of storage hardware and let you pay for long-term storage, hedged with annuities to cover inflation and such. All to make a competitive offer while taking the risk off the customer.
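A toy version of that pricing idea, assuming storage costs decline by some fixed annual rate and adding a risk margin (both numbers invented for illustration):

```python
def prepaid_price(monthly_rate, months, annual_cost_decline=0.10, risk_margin=0.25):
    """Price a 'pay once, store for N months' deal: sum the expected monthly
    costs under an assumed annual cost decline, then add a risk margin."""
    monthly_factor = (1 - annual_cost_decline) ** (1 / 12)
    expected_cost = sum(monthly_rate * monthly_factor ** m for m in range(months))
    return expected_cost * (1 + risk_margin)

naive = 4.0 * 120                 # flat time x amount for 1 TB over 10 years
hedged = prepaid_price(4.0, 120)  # declining-cost estimate plus risk margin
```

Even with a 25% margin, the declining-cost estimate comes in below the naive flat sum, which is what makes such an offer competitive.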
Then there will still be the risk that someone within the company uploads more data under the same account and that data will eat the budget. The feature here would be that some funds are specifically locked to fund storage of specific pieces of data.
You can control at a granular level who has access to which of the company's Storj DCS buckets, and for how long, by setting restrictions using our Access Grant management options. You can also prevent certain users from adding new uploads by not giving them write access.
Please read how Access Management and Access Grants work with Storj DCS in general.
I would hope that you would not share the project’s Root API Key Secret, which gives wildcard access to the entire project.
As stated in the section about Access Restrictions, Caveats can restrict whether an Access Grant can permit operations on one or more buckets.
You can learn more about how to use access restrict flags with Uplink here.
I’m still missing the part where the budget tied to a project can’t be accidentally drained by some new access grant created later. If this is just «never share the root access grant», then you’d need a lot of separate projects to maintain.
Please indicate the scenario in which you would need to give multiple users full access to a project, rather than restricting their access to specific buckets only. Note that users can create prefixes within a bucket that work similarly to subfolders (these really are prefixes, as object storage does not use a folder/subfolder structure).
Also note that a bucket can hold data from many users, but each user will only be able to list and manipulate the data their own access grant allows them to access. If they do not have the encryption key associated with another user's access grant, they will not be able to see that the other data even exists, let alone read, copy, or download it.
How exactly would a user be able to create a new access grant themselves if they do not have root access to the whole project? You can restrict the users from writing new files or downloading files from the bucket they can access, or restrict the time during which they are allowed to do so if you want to limit how much usage charges they can generate.
Furthermore, you can reduce the usage limits for a given project so that once the storage and/or egress limit has been reached, no new data can be added and no existing data downloaded until the limits are raised again (and there is still enough balance in the account to pay for the higher limits).
I’m not saying that in this scenario multiple users would need full access to a project. I’m saying that with this solution you’d have to have hundreds of separate projects per user.
In a service like this, the user wants to never care about storage costs again after uploading the data. They want to pay a fixed price to store this 100 GB set of data for the next 10 years, making sure they will not run out of the budget allocated to storing this specific 100 GB chunk of data for that period. Given that budget is allocated per project in Storj, they’d need, for each such separate set of data, to create a new project and pay the balance required to store the files for the required (e.g. by law) amount of time.
Consider, let’s say, a photographer who offers a service of archiving photos from a wedding for 10 years for a fixed upfront price. Each weekend they go to a different wedding, take tons of photos, edit them, then put the whole session into the cloud for storage. They’d love to just pay once for the storage, as opposed to being reminded of bills for each past wedding they shot, and not accidentally run out of funds to store past weddings when uploading a new one. They’d love to be sure that, after paying the 50 USD that would cover Storj’s costs at current prices, this one specific photography session would stay there for 10 years, even if in the future they accidentally uploaded so much data for other sessions that it ate more budget than they allocated for them.
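For what it’s worth, the 50 USD figure roughly checks out under an assumed $4/TB-month list price:

```python
STORAGE_USD_PER_GB_MONTH = 0.004  # assumed list price, i.e. $4 per TB-month

def session_storage_cost(gb, years):
    """Upfront cost to keep one wedding session stored for the full term."""
    return gb * STORAGE_USD_PER_GB_MONTH * years * 12

cost = session_storage_cost(100, 10)  # 100 GB for 10 years
print(cost)  # 48.0 USD, roughly the 50 USD mentioned above
```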
Please note that there is no such allocation of “budget” per project. Whatever balance there is in an account will be available to cover usage on all projects in the account. You can set a usage limit per project, but this is independent of how much funds are actually available in the account.
I am still not following why it would be necessary to create new projects for each data set. The problem is most likely more an issue with the granularity of billing currently available to the customer: the invoices show a breakdown of charges by project, and what is needed is more granularity, so that the customer could see which buckets generated what amount of storage and egress charges. AFAIK this is already being discussed internally for possible implementation, as it is a feature that will be useful in general for many customers, not just for this specific use case.
Currently the customers would have to themselves implement a way to keep track of bucket storage and egress usage.
Ah, sorry then, I somehow thought there is a specific budget per project. Then indeed so, even having multiple projects won’t help.
Indeed this would be nice and a good first step here.
I’m slowly starting to think that maybe this is not a feature Storj would be in a position to offer; instead, it would be a feature of an application built on top of Storj. But then, IPFS pinning sounds like a very similar thing.