No, it shouldn’t. You want money for storing files that nobody can download from you? ))
You mean I should disable download completely?
As you wish - disqualification will solve your problems ))
I would allow GET_AUDIT of course…
ATT won’t get annoyed. They are the best. But it’s a low bar to clear: providing the agreed-upon service. Even Comcast would not get annoyed. My monthly utilization has been 30-40 TB/month for a couple of years now, since the virus. And believe me, I’m the most annoying customer there is: I cancel service every two years and sign up again to keep getting the promo rate for another two years.
But the whole thread is surreal. Traffic is what brings SNOs money. Why would anyone want to limit it?
ISPs have to adjust for increased traffic, not limit it. Networks gain more bandwidth each year, devices get more performant, the internet is almost like the air we breathe, multimedia and AI are already common things, and almost everything we do needs internet, traffic, bandwidth. So they have to realise all of this and adapt, upgrade, and offer better and faster services. There is no way around this. 1 Gbps home internet is almost a standard in every developed country. Soon it will be 2.5 Gbps. So what are you complaining about? I don’t see a problem here.
I would agree that the best option is to just use what the OS reports. Not being able to break it down per-satellite is an acceptable compromise. But displaying an actively misleading value for the overall usage is worse than no graph at all.
I use a dedicated zfs dataset for a node, so I would be an ideal candidate for this functionality, as `df` or `zfs list` will tell me exactly how much the node is using. But if I’m no longer able to get correct data out of the multinode dashboard, then it’s a net negative for me. I’m really wondering who the target audience for the feature is. In the bug report, the second screenshot even shows the correct total and free values - why not just assume that used = total - free? This seems like it could trivially be “fixed enough” purely within the frontend.
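For a dedicated dataset/disk that is really just one subtraction on values the OS already reports. A minimal sketch in Python (the mount point is a made-up example path, not anything from the dashboard code):

```python
import shutil

# Ask the OS for the filesystem totals, the same numbers `df` shows.
# "/mnt/storagenode" is a hypothetical mount point for a dedicated dataset.
total, used, free = shutil.disk_usage("/mnt/storagenode")

# If only total and free are trusted (as in the bug-report screenshot),
# the displayed usage can simply be reconstructed in the frontend:
derived_used = total - free

print(f"total={total / 1e12:.2f} TB, free={free / 1e12:.2f} TB, "
      f"used={derived_used / 1e12:.2f} TB")
```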
Then you can easily check with zfs what the usage in your dataset is.
The dedicated disk feature was not designed for Storj Global; it’s mostly for Storj Select, where nobody cares about nodes’ dashboards, because Grafana is used for that.
The fewer checks and calculations, the faster the node.
So, I believe this will not be implemented unless the Community suggests a PR with a feature flag that is disabled by default.
B/W continues, very much like last year’s testing, but with a very short TTL.
But yay, nonetheless
2 cents,
Julio
39% of my incoming data is with TTL now. Unfortunately it is 7 days, not 30 days as anticipated in the test.
Yes, 7 days now? So noted, but better than it was at first (less than 24 hours). This was the basis of my earlier post re: TTL,
Imagine 10 of these guys with 50 GB links, using a TTL of 24 hours or less, as this guy was a few days ago. Crunch those numbers and, with only 3-4 petabytes, by my back-of-the-napkin-in-my-head calc they’d saturate the network while representing less than 10% of the current network size. Allowing less than a 24 hr TTL is a very bad idea. Haha… the docs even suggest you can set a 300 millisecond TTL… lmao. Who wins in that case? Nobody. Even if Storj gets minuscule segment fees, congestion would be a serious problem, not to mention an attack vector. Better to limit that unrealistic expectation upfront than to p’ off clients later, or be packed with daily rotating TTL porn videos. Hahaha.
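Roughly crunching those numbers (my assumptions, not Julio’s exact figures: “50 GB links” read as 50 Gbps of sustained ingress per operator, full link utilization as an upper bound, and a flat 24 h TTL):

```python
# Back-of-the-napkin sketch of the scenario above. All inputs are assumptions
# for illustration, not measured network numbers.
nodes = 10
link_gbps = 50                 # assumed reading of "50 GB links"
ttl_hours = 24

ingress_bytes_per_sec = nodes * link_gbps * 1e9 / 8          # bits -> bytes
daily_ingress_pb = ingress_bytes_per_sec * 86_400 / 1e15

# With a 24 h TTL, the data stored at any moment is roughly one day's ingress,
# all of it being continuously written and then trashed:
steady_state_pb = daily_ingress_pb * (ttl_hours / 24)

print(f"daily ingress (upper bound): {daily_ingress_pb:.1f} PB")
print(f"steady-state stored data:    {steady_state_pb:.1f} PB")
```

Even at a fraction of full utilization that lands in the few-petabyte range, which is the point: a huge, constantly churning slice of the network that barely pays for itself.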
2 cents,
Julio
Congestion with 27k nodes? I really doubt it.
A minimum TTL is probably not a solution. Imagine someone (a competitor/ransomware operator/hacker) wants to attack the network. He uploads huge amounts of data (even without TTL), but deletes every uploaded object just one second after uploading.
What we would have then:
- the networks of many SNOs are saturated or close to it
- SNOs’ HDDs are busy writing new data and trashing old data
- satellites are busy with bloom filter calculations
- SNOs’ HDDs keep the data for at least a few days before getting a fresh bloom filter, plus 7 days in the trash, but the SNO will be paid for just 1 second of data keeping (~0.000115% of the fair amount)
- kilowatts of electricity are consumed to process all this
And all of this will cost the attacker almost nothing. He will pay for only 1 second of storage for each piece.
So the possible solutions I see to make such attacks costly are a minimum paid storage period (like AWS does for cold storage: you can delete your data after 1 second, but you will pay for a few months anyway) or making ingress paid too (probably much cheaper than egress).
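To put numbers on that asymmetry (the 3-day bloom filter delay and the 30-day minimum period below are assumptions for illustration):

```python
# Rough sketch of the cost asymmetry: the attacker is billed for 1 second of
# storage, while the SNO actually holds the garbage far longer.
paid_seconds = 1                         # object deleted 1 s after upload
held_seconds = (3 + 7) * 86_400          # ~3 days until a fresh bloom filter
                                         # (assumed) + 7 days in trash

ratio = paid_seconds / held_seconds
print(f"paid fraction of actual retention: {ratio:.6%}")      # ~0.000116%

# With a minimum paid storage period (as AWS does for cold storage tiers),
# e.g. 30 days, the attacker's bill grows by orders of magnitude:
min_paid_seconds = 30 * 86_400
print(f"cost multiplier vs. 1 s billing: {min_paid_seconds / paid_seconds:,.0f}x")
```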
exactly the same thing?
what about all the walkers?
If I read this right, +1ns is possible? S3 Compatibility - Storj Docs
Does the satellite calculate bloom filters for TTL data? Asking because I don’t know; theoretically it doesn’t need to…
I agree with pretty much everything you just said. I rather poised myself, and my comments herein, to assuage the defining political environment prevalent here. So kudos to you, my sentiments exactly. There are many rather obvious additional problems building, for which they should be demonstrating proactive and predictive mitigation, which they are not. Or you know, it’s just all in my mind. Nonetheless, love it here - I really do. Laverne & Shirley forever (I think I’m high now).
5 cents,
Julio
Yeah… that’s straight up lunacy. lol!
They got a handle on that a little less than a year ago, I believe.
2 cents,
Julio
Break out your napkin!
Of course not. It costs nothing, because it’s a TTL on the database record, so it’s handled by the database backend automatically.
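In other words, something like this (a minimal sketch of the idea only; the table name and schema are made up, not the actual satellite or node code):

```python
# Illustration: a TTL piece carries an expiration timestamp in a database,
# so expiring it is a simple DELETE on that timestamp, no bloom filter needed.
import sqlite3
import time

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE piece_expirations (piece_id TEXT, expires_at INTEGER)")

# Record a hypothetical piece that expires 7 days after upload.
db.execute("INSERT INTO piece_expirations VALUES (?, ?)",
           ("piece-123", int(time.time()) + 7 * 86_400))

# Periodic cleanup: drop every piece whose TTL has passed.
db.execute("DELETE FROM piece_expirations WHERE expires_at <= ?",
           (int(time.time()),))
db.commit()
```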