What will the system do if the node has exhausted its storage space but there is still a lot of traffic? Will it only serve downloads of the files it already holds, or, given the node's channel and uptime, will stored files be replaced with files that are downloaded more often, to the benefit of the operator?
Because right now the download volume is only 10% of the upload volume. I understand that the node is only 4 days old, but such information is hard to find. And I understand that this is alpha)
It seems you don't understand how it works, and thus I don't understand what you mean by "distribution" based only on your local stats.
If your node fills up, it will not accept any new data. The data can be deleted only if the customer deletes it or if they specified an expiration date for it; otherwise your node will keep this data. Of course, it can still be downloaded back by the customer, or by the satellite in order to repair lost pieces and upload them to other nodes.
@Alexey
Lyosha, Lyosha, Alexey)))) Yeah… it happens: I read English fairly well, but my speech and spelling are very poor without practice)
I think I have problems with the translation, but I understood everything correctly and got my answer. For storing 1 TB, Storj pays $1.5 per month, but for transferring 1 TB to the client (when the client downloads their pieces from your node) Storj pays $20. I think there is no need to explain which is more profitable for an operator from a country with unlimited Internet (and I don't mean 100 TB; we simply don't meter traffic at all, and I could serve more than 1 TB per day). Then what are the communication channel and the traffic figures in the calculator for, if everything depends on whether the client will use them?
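To make the comparison concrete, here is a minimal sketch of that payout arithmetic, using only the rates quoted above ($1.5 per TB stored per month, $20 per TB of egress) and ignoring held amounts, surge payouts, and repair/audit traffic:

```python
STORAGE_RATE = 1.5  # $/TB stored per month (rate quoted in this thread)
EGRESS_RATE = 20.0  # $/TB downloaded by customers (rate quoted in this thread)

def monthly_payout(stored_tb: float, egress_tb: float) -> float:
    """Rough monthly payout for a node, before held-amount deductions."""
    return stored_tb * STORAGE_RATE + egress_tb * EGRESS_RATE

print(monthly_payout(stored_tb=1.0, egress_tb=0.0))  # $1.50: cold 1 TB node
print(monthly_payout(stored_tb=1.0, egress_tb=1.0))  # $21.50: same node, 1 TB egress
```

Under these rates, a single TB of egress pays more than a whole year of storing that same TB.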
And then the question is: why is there no differentiation between nodes? After all, if my node has 99.5% uptime and a 1 Gbps channel to the neighboring countries (to the rest of the world too, though with higher ping), I still will not start earning if the client simply uploaded a home video that they only watch when celebrating their wedding anniversary…
Logically, as I see it, the only option left is to bring the node up to 20 TB+ to ensure a full variety of clients.
As I understand it, in my case it comes down to luck. If my 1.5 TB is filled with data that customers download every day, then everything will be fine; if it is just cold storage, then it earns pennies.
Yes, it's random. Or almost random. And this is true: if customers do not download their data back, you will earn only $1.5/TB per month for storing it.
Yes, there are countries which don't have a bandwidth cap. Of course, bandwidth is more valuable for operators from those countries, because it costs them nothing.
However, the price of storage keeps falling, while the price of bandwidth has not changed in the past few years.
But what does that change, if the customer doesn't want to download their data back?
What exactly do you propose?
Look at it from the client's side: how much are you willing to pay for storage, and for what exactly?
It will not change anything; that is the customer's right. But we are talking about the system. Why not, after the piece-allocation stage and after a certain amount of time has elapsed, favor those who take this seriously? I have a dedicated channel exclusively for the server; unfortunately I cannot set up RAID for now, but still.
Customers want constant access to their data, and that problem is certainly solved by how the system allocates pieces, but there are customers who download their data much more often than others.
So, a school-level problem: 100 customers upload data to operator #1 and never download it; this operator has high uptime and a wide channel. Meanwhile another 100 operators hold data that customers download every day, and those operators cannot serve it at full demand. The question: why should operator #1 keep high uptime and a wide channel?
Maybe it would make sense to transfer pieces to those who have higher uptime and who put more time and capacity into this.
I do not want to be rude, I am just sharing my thoughts as I see them. I am a big fan of Storj, and believe me, all my friends laughed at me and did not understand when I said that Storj would learn how to broadcast live video)
I still do not get what you want to say, sorry.
I don't see the model you are suggesting. To me it looks like a very expensive and slow data transfer from one region to another. For what purpose? Why would the customer be willing to pay for that?
To pay more to the SNOs without a bandwidth cap? But why? The customer will get their data anyway. They have 80 pieces across the globe when they need only 29. Whoever delivers faster gets the payment.
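As an illustration of that race, here is a minimal sketch; the node names and latencies are made up and the real client logic is more involved, but the 29-of-80 figures are the ones above:

```python
import random

TOTAL_PIECES = 80   # pieces of one segment spread across the globe
NEEDED_PIECES = 29  # any 29 of them are enough to reconstruct the segment

# Hypothetical response times (ms) of the nodes holding the pieces.
latency_ms = {f"node-{i:02d}": random.uniform(20, 800) for i in range(TOTAL_PIECES)}

# The client requests pieces from many nodes in parallel and stops as soon as
# the fastest 29 have answered; only those nodes earn the egress payment.
paid_nodes = sorted(latency_ms, key=latency_ms.get)[:NEEDED_PIECES]
print("nodes paid for this download:", paid_nodes)
```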
Why should your idea be more attractive for customers than the current model?
Or are we talking about SNOs? If they do not deliver uptime and/or reliability, their held amount will be used to recover the data to more reliable SNs.
If there are more SNs than needed, the data will be spread between them and each SNO will get less income; some SNs will go offline, the data will be spread between the remaining SNs, income will grow, and the system will be stable again.
If there are not enough resources, the price can be changed to attract more SNs (for example, the current surge payout). The normal balance between supply and demand.
As a system and as a buyer, I understand you. Of course, there is no point in pumping data to another region, but if there are nodes that cause problems yet remain alive, or if there are nodes that can transfer data better, why not???
I am proposing an improvement for the consumer; the operator earning more is just a consequence of it. No one would be paid for the exchange of pieces itself, and the operator whose frequently-downloaded piece is taken away and replaced with a colder one will not be thrilled, but the operator who keeps everything under control will serve more downloads and earn more money.
And believe me, I understand that the network is in alpha and the system can easily place data anywhere, since most of the space is still available.
Just explain to me: how can you get 1 TB of downloads per day? As far as I understand, there are no options, only randomness. You can run a 1.5 TB node and theoretically serve 1 TB per day, or you can run a 100 TB node and serve only 10 GB per day.
We'll see. In 3 days, 293 GB was uploaded to me and 30 GB was downloaded. When the whole drive fills up, I will reconsider; I left more than 10% free, as the instructions said, just in case.
If some nodes reach the limit of their bandwidth and the number of pieces falls below the threshold, the repair job will be triggered so that customers don't lose access to their data.
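A minimal sketch of that repair trigger, assuming a hypothetical threshold value (only the 29-of-80 figures appear earlier in this thread; the real satellite logic is more complex):

```python
TOTAL_PIECES = 80      # pieces created per segment (figure from this thread)
MIN_NEEDED = 29        # pieces required to reconstruct a segment
REPAIR_THRESHOLD = 52  # hypothetical: start repair well before reaching 29

def check_segment(healthy_pieces: int) -> str:
    """Decide what the satellite should do with one segment."""
    if healthy_pieces < MIN_NEEDED:
        return "lost"    # not enough pieces left to reconstruct
    if healthy_pieces < REPAIR_THRESHOLD:
        # Download any 29 healthy pieces, re-encode the segment,
        # and upload replacement pieces to other nodes.
        return "repair"
    return "healthy"

print(check_segment(80), check_segment(45), check_segment(20))
```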
You can't. It's unpredictable, and I don't see a way to make it more predictable. Today a customer broadcasts their wedding, but tomorrow they will upload only one document or photo to their archive.
If the client wants to download, they will, no matter where the pieces are.
If they do not want to download, they will not, and again it doesn't matter where the pieces are located.
What options do you want to have?
At the moment it doesn't matter. Because of long-tail cancellation, almost all data will go to the nodes that are fastest for that customer. So the data will be distributed close to the customer.
Only when pieces are lost will the satellite distribute the repaired pieces to other regions (or to the same place; it is by chance at the moment, but perhaps in the future it will place them around itself).
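For completeness, a sketch of that long-tail cancellation on upload, with hypothetical counts for how many transfers are started versus kept (only the idea of cancelling the slowest transfers comes from this thread):

```python
import random

STARTED = 110  # hypothetical: begin uploading to more nodes than needed
KEPT = 80      # keep the pieces on the fastest nodes, cancel the rest

# Hypothetical completion times (ms); nodes close to the customer finish first.
finish_ms = {f"node-{i:03d}": random.uniform(50, 3000) for i in range(STARTED)}

fastest = sorted(finish_ms, key=finish_ms.get)[:KEPT]
print(f"kept {KEPT} fast nodes, cancelled {STARTED - KEPT} slow uploads")
```

Since nearby nodes tend to win the race, the pieces naturally end up close to the uploading customer.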