I am fully aware of how distributed storage networks work. I’m not sure you understand that when someone places backups, archives and audit data into a system like Storj, it is usually not because they expect to retrieve them, but because doing so satisfies the basic requirements of a 3-2-1 backup strategy (live data, local backup and remote backup). So this data is long-lived, but unless the owner hits a major issue it is never retrieved, just deleted at some point in the future.
Storj is now very competitive in this market space. AWS, for example, has its Glacier storage tier, which also costs $4 per TB per month, but you do not want to use it if you need to recover large parts of your data set after a major issue: high-speed retrieval costs $10.00 per 1,000 requests, and “high speed” means the requested blocks become available in minutes rather than milliseconds. If you want to access your data cheaply, you have to wait 5-12 hours for the request to be handled.
Years ago a request could have been for a very large object, but nowadays most backup tools create tens of thousands of objects/files per TB backed up, since they do things like deduplication and therefore work with much smaller block sizes.
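To illustrate why small-object backups make high-speed Glacier retrieval expensive, here is a rough sketch of the request-fee arithmetic. The $10.00 per 1,000 requests figure comes from the post above; the 50,000 objects per TB is an assumed example count for a deduplicating backup tool, and per-GB transfer charges are deliberately not modeled.

```python
def retrieval_request_fee(num_objects: int, fee_per_1000: float = 10.00) -> float:
    """Request fees alone for high-speed retrieval.

    Models only the per-request charge quoted in the post
    ($10.00 per 1,000 requests); any per-GB costs are ignored.
    """
    return num_objects / 1000 * fee_per_1000

# Assumption: a dedup backup tool producing ~50,000 small objects per TB.
objects_per_tb = 50_000
print(retrieval_request_fee(objects_per_tb))  # 500.0 in request fees per TB restored
```

So under these assumed numbers, restoring a single TB at high speed would cost $500 in request fees alone, before any data-transfer charges, which is the core of the cold-storage trap described above.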
So Storj offers one of the best options for ‘cold’ storage: it has the recovery behaviour of a ‘hot’ storage system, but the pricing of a ‘cold’ solution. The problem, from a node operator’s point of view, is that we hope to gain additional income from retrieval requests, so we want the blocks we store to be accessed frequently.