Major Bump to Ingress Bandwidth

Instead of using the dedicated disk feature, just allocate a value bigger than the drive’s capacity, like 30TB for a 28TB drive. It will do the same thing, but with a correct dashboard.
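
A minimal config.yaml sketch of that over-allocation, using the storage.allocated-disk-space option mentioned later in this thread; the exact value formatting is an assumption and may differ between node versions:

# advertise more space than the 28TB drive physically has
storage.allocated-disk-space: 30.00 TB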

2 Likes

No, it will not do the same thing. The idea of using the dedicated disk feature is to spend less time on tracking used space. No need to scan all the pieces on startup. No need to track the size of the trash folder. The node runs faster that way.

1 Like

Me? Never… :smile:.

Yeah, I realised it after posting. And I remembered that it doesn’t take into account an allocation over the drive’s capacity.

So here’s the question I’d like you all to chime in on.
This network bump is a perfect example of what the set-up has been leading to (from the testing this time last year, to the network improvements that facilitate a more real-time, live and active file system), and it seems to be working well. The network is responding beautifully to this particular onboarding. This new TTL client is making great use of the network (yay, right?). Yes, we’re seeing activity, great; however, the network is handling 8x more bandwidth, on both ingress and deletions - working your hardware. Nevertheless, there is a danger that should be considered: using the network in this manner will saturate future growth possibilities, permanently. How? Use your imagination, it’s rather obvious. Consider that your gains can be chewed up within 24 hours… - discuss!

2 cents,

Julio

It’s not obvious at all, please clarify.

Performance? Current node resource usage at present load is noise level. Barely noticeable. 10x more throughput and I/O won’t make much difference. When the tests were running last year it was perceptible, but a $10 SSD pushed it back to noise level.

Space? There is plenty of space, and SNOs have proven they can make more available. Even I, an average home user, report 10TB free to Storj, while I have 60TB of free space. Adding 50TB would be just a config file edit away.

So… what’s the concern you are alluding to?

3 Likes

A clear statement would help everyone understand this unknown hindrance to future growth.

4x + 6y = 99 cents,

</nerdatwork>

1 Like

One concern I have is the relationship with my ISP. When they sell me an internet connection, I’m aware that it’s not a dedicated connection with dedicated bandwidth for me. They bet that most users will not use their full bandwidth all the time, similar to how airline overbooking works. This would break their model and make my usage look bad in the ISP’s eyes. If it comes to that, is there a config option to tell the node not to accept data that comes with a TTL under X amount of time?

1 Like

Set storage.allocated-disk-space to the space that is already used, and there will not be any ingress anymore. As for TTL data - I think Storj’s clients should have predictable speed and capacity independently of TTL. What if everyone refused to accept TTL data?
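
As a rough sketch of that suggestion, assuming the node already stores about 9TB (a figure chosen purely for illustration), the config.yaml line would be something like:

# allocate only what is already used, so the node reports itself as full and accepts no new ingress
storage.allocated-disk-space: 9.00 TB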

1 Like

What is the definition of a dedicated disk?

Do you have a Pros/Cons rundown?

Cheers

Will it saturate? Or force an increase in capability?
Right now there is no pressure against the wall.
I don’t want to dwell on it; obvious things will occur in obvious ways, and they will only be worth talking about when they occur, obviously. ;>

1 Like

@Julio, We’re not monkeys dancing for your entertainment. If you’ve formed a thought… type it out :wink: :monkey_face:

Nodes are vastly more capable than the internet connections they’re behind… satellite billing is accurate to the ms (so abuse has a cost)… the beefy internet connections needed to keep Storj busy with short-term files aren’t free… and SNOs can add capacity faster than customers can fill disks.

I lack the imagination to see a problem. If Storj pays me to store ones and zeroes: I’ll continue to do so: regardless of duration :money_mouth_face:

1 Like

That’s the point. If my internet connection is maxed out I can make more :money_mouth_face: by rejecting those 7-day TTL uploads.

I hope Storj never adds the option to let SNOs decide what paying customers do with their files (like when they delete them). But if some nodes do start rejecting uploads… I’ll take them! :heart_eyes:

4 Likes

Great, win win then… :joy:

1 Like

Why should you worry how it “looks in their eyes”? They make a business decision to oversubscribe. You make a business decision to use the service within the framework of the contract. There is a provision that speeds may drop at high load. Every party involved understands that. I don’t see the problem. Feel free to saturate your connection 24/7. You are paying for it. I sure do.

You don’t want that option to exist. You don’t want SNOs to be able to pick and choose what data gets preferential treatment on the network. It’s a horrible idea.

5 Likes

In config.yaml there is a max concurrent connections setting that can roughly throttle your ingress, I believe.

But I decided I will just let my ISP send me a nasty letter if they get annoyed by my data usage (it’s AT&T, btw). During the crazy test months my usage was around 40TB. It’s merely 13TB or so now. They haven’t complained yet.

1 Like
# how many concurrent requests are allowed, before uploads are rejected.
storage2.max-concurrent-requests: 20

This is for uploads only; there is no limit for downloads.

Uploads - that’s ingress. Don’t you think it is strange to limit clients downloading their own data? )

It doesn’t matter whether my internet connection is congested by uploads or by downloads. This feature should be implemented for both directions.