Updates on Test Data

I feel like you can still use a pretty basic node, and it should work fine. If your node was only able to handle 512KB/s of random writes, that’s still about 1.2TB/mo ingress.
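That figure is easy to sanity-check. A back-of-the-envelope sketch (assuming a sustained 512 KB/s around the clock and a 30-day month; the exact result shifts a bit depending on KB vs KiB and month length):

```python
# Back-of-the-envelope ingress estimate for a node that can only absorb
# 512 KB/s of random writes, sustained 24/7 over a 30-day month.
write_rate_kb_s = 512                    # KB/s the disk can handle
seconds_per_month = 30 * 24 * 60 * 60    # 2,592,000 s

ingress_tb = write_rate_kb_s * 1000 * seconds_per_month / 1e12
print(f"~{ingress_tb:.2f} TB/month")     # ~1.33 TB/month
```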

The conclusion they should take from the stress testing is that they really just need to implement better backpressure, so that slower nodes can still contribute where they can rather than crashing outright. If a node can’t keep up, it will naturally get paid less (less ingress => less data stored => less storage pay, while less egress directly translates to less egress pay). With how the erasure coding and races work, it’s not like they need every node to be running at peak performance at all times.
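The backpressure idea can be pictured as a bounded queue that refuses work instead of buffering it without limit. A toy illustration, not Storj’s actual implementation (the queue size and the `offer_piece` helper are made up):

```python
import queue

# Toy backpressure sketch (not Storj's real code): a slow node accepts
# pieces only while its bounded upload queue has room; when full, new
# pieces are refused immediately, so the upload race settles on faster
# nodes instead of the slow node buffering unboundedly and crashing.
upload_queue = queue.Queue(maxsize=4)  # hypothetical per-node limit

def offer_piece(piece: bytes) -> bool:
    """Accept a piece if there is capacity, otherwise refuse at once."""
    try:
        upload_queue.put_nowait(piece)
        return True   # accepted: the node earns for what it can handle
    except queue.Full:
        return False  # refused: no crash, the race goes elsewhere

accepted = sum(offer_piece(b"piece") for _ in range(10))
print(accepted)  # 4 — the node takes exactly what it can handle
```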

3 Likes

I think you are misunderstanding the message.

“Use what you have” means don’t buy hardware solely for storj.

It does not mean that it will run on anything you have laying around.

If all you have is a potato, and it serves the existing purpose well — then don’t run storagenode, as opposed to upgrading your potato just for the node.

If you have a capable server that is idle — sure, run storagenode, you will get paid handsomely for the minuscule extra load. Yes, load is minuscule. Yes, payout is free money.

4 Likes

Why should I have such a thing? Of course I buy hardware for storj. :joy:

If you are joking — lol, haha. But if you are serious: because nobody buys hardware that fits the current job 100% with no excess.

When you buy drives, you buy larger than today’s needs require, so you don’t have to buy again tomorrow. When you pick a compute device, you also choose from a fairly discrete set of performance levels. Same with RAM and other system components. As a result, every system always has plenty of spare, unused capacity.

For home users with NASes this is most egregious: the CPU on those machines is always idle, the disks are always spinning, and half the space is always empty.

That’s where storj comes in: instead of letting those unused resources go to waste, it puts them to work. Even if it paid nothing, running a storagenode would be the responsible, green thing to do. But it also pays!

As an anecdote,

my server at home, which I put together from old server parts for a few hundred bucks and used disks I got on sale on eBay at about $8/TB, runs NVR, homebridge, plex, and zerotier, over which a bunch of extended family members back up to it from all over the world. It has 52 CPU threads, 30TB of free space, and 4 empty bays.

It consumes power pretty much regardless of utilization. It costs me about $68 a month in electricity to run.

Could I have them back up to cloud storage and save money? Maybe. But where is the fun in that?

3 Likes

Hm, I have never tried to run a node on the same laptop I use for my own stuff; I just have a home server with free storage and spare RAM and CPU capacity that the other services running there don’t fully use.
But I guess I should try. Unfortunately my laptop has only 26GB free (out of a 1TB SSD). However, because it’s an SSD, I likely wouldn’t notice the storagenode; Chrome consumes much more resources, and so does WSL2 with Docker Desktop (I usually have no more than 0.5-1GB of free RAM while working).
However, I believe a laptop is not suitable for running 24/7, unless that’s its only purpose — for example, an old unused or partially broken laptop (dead battery, cracked screen) that would otherwise go to the trash because the repair would cost more than a new one. In that case, I guess you could use such a laptop as a kind of “server”.

1 Like

I have potatos, but they don’t last a week. We like french fries too much. :stuck_out_tongue_winking_eye:

3 Likes

damn! I just ordered 512GB DDR4 LRDIMM :sweat_smile:

Our numbers are looking promising. I am sure we will experience some growing pains, but nothing we can’t handle. There is one variable we would like your feedback on: how many hard drives (how much free space) do you have on standby? If you could please double-check the free space your node is reporting, that would help us model how much capacity we have available.

It is a bit early for the following question, and please keep in mind that none of the larger deals we are looking at have been signed yet. Can we get a reading of your standby hard drives as well? For example, I have 8 hard drives connected to my system, but only 4 of them are currently connected to power. So in order to give an accurate reading of my free space, I have connected them to power and increased my free space setting accordingly. I might have to power them down if these deals don’t get closed, but running them more or less idle for some time is affordable for me.

And one more important detail: we need to clean up as many suspended accounts as possible to free up as much space as possible. This will reduce your payout for the current month, and it will look like we are trying to murder your storage node. Please be assured we have the opposite in mind. There is a lot of customer data we would like to send to the storage nodes later on, and the more space we free up, the more bandwidth we have available to split the load.

6 Likes

Good.

SNOs are used to pain.

How much space do you need? Speaking for myself, I can add space. I am sure others can too.

Please solve this: https://review.dev.storj.io/c/storj/storj/+/12806
I have turned off used space calculation on several nodes so they are reporting garbage.

We are getting used to being stabbed in the back.

7 Likes

We can add a lot of space if you need it. Remember that we have the 1-IP limit, so it would be useless to add a lot of space in a common setup.

2 Likes

That is a bit harder to estimate. At the moment I am estimating that we would fill 75% of the nodes (which doesn’t mean 75% of total space!). So technically we don’t need any extra space. The point is that adding more space to these smaller nodes would give us more bandwidth to work with. I am trying to model that out ahead of time. (There are some mistakes in this calculation; I need to improve it next week.)

There are also some options we haven’t invested in yet, like reducing the storage expansion factor or a modification to the node selection. Having more free space available will give us more time for these modifications; I don’t want to do that under time pressure. Having a better understanding of our situation ahead of time will also help us prioritize better.

Thanks for the feedback. I forwarded it to the team.

Removing that limitation is not an option. If you have too much space in one location, the only option would be to join the Storj Select program.

3 Likes

I’m pretty sure the vast majority of SNOs are planning to add new drives when the old ones fill up. Once you’ve started farming Storj, it’s hard to stop. :sweat_smile:
So the free space available is not a problem and will not be a problem. The price per TB is at its lowest for a node setup: you can add 20TB for $300 or less, and the coming years will bring much lower prices.
Your main worry is keeping us interested by bringing in more data. We can adapt quickly.
I assume the actual space will not be filled in a very short time; I don’t see a big client moving PBs of data to Storj in a week. So I believe we have enough time to react.

8 Likes

I’m willing to “add one more HDD” when a node fills. Probably forever: as the rate-of-filling is slow enough that I’d be dead before I’d realistically not be able to buy+add a drive.

I think Storj may have to estimate unseen capacity by assuming something like “15% of SNOs will always be willing to add 10TB if they fill up”. So peak capacity is effectively unlimited. (And never mind people on other projects who could switch: even ignoring compression, Chia has 500x+ Storj’s used+online capacity.)

I am doing exactly that on my node this weekend. The number of drives I can add to my system is limited. I am connecting the hard drives I have reserved for this exact situation; it takes me a few days to reconfigure everything. I am going to open another thread in the forum later about a benchmark tool we have written. My point here in the community is to give you what information I can early on, so that you also have a chance to explore the boundaries of your storage nodes ahead of time, like me. It would feel unfair not to give you fair warning ahead of time. What you do with that information is up to you.

I don’t think that math will work here. My estimate for my own storage node is that I would have to upgrade my internet connection to fill my drives. I don’t have unlimited bandwidth, so I will stop adding hard drives at some point to keep my bandwidth usage in check.
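To see how the connection, not the disk bays, becomes the limit, here is a rough sketch with entirely hypothetical numbers (a 100 Mbit/s uplink with half of it dedicated to storagenode ingress): at that rate a single large drive fills in under a month, but every drive added beyond that competes for the same fixed uplink.

```python
# Hypothetical numbers: time to fill one drive when the bottleneck is the
# internet uplink rather than the disk.
drive_tb = 14.5            # capacity to fill (assumed)
uplink_mbit_s = 100        # connection speed (assumed)
usable_fraction = 0.5      # share of the link dedicated to ingress (assumed)

ingress_bytes_s = uplink_mbit_s * 1e6 / 8 * usable_fraction   # 6.25 MB/s
days = drive_tb * 1e12 / ingress_bytes_s / 86400
print(f"~{days:.0f} days to fill")   # ~27 days
```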

4 Likes

It’s predicted that with growing AI needs, HDD prices will increase.

1 Like

@littleskunk Do you know if the new incoming data will be mostly cold data or are these new customers using it actively with a lot of egress bandwidth coming to our nodes?

1 Like

Let’s say hot data with low egress. I say hot here because it will get uploaded with a TTL and be stored on the node for a short time — maybe up to a few months at most. I would expect higher ingress bandwidth usage but almost no change in egress bandwidth.

3 Likes

Perfect scenario. Looks like some video monitoring use case.
I’m not gonna add HDDs — I’ve still got like 60% free space unfilled, so not with this machine.

I’d rather see more Storj capacity and bandwidth come from new nodes yet to join the network. New homes need to join, with new connections. If you do experience real growth and HDDs filling fast, then it will start solving itself one way or another.

All you can do is make it easy to join, and get rid of that unfriendly 12-month “no pay” period. And start paying for trash, like normal people do; then things will be smooth for SNOs.
Unfortunately I wouldn’t count on a miraculous loaves-and-fishes multiplication of HDDs.

But rather count on every +1TB costing like $8-15/TB (bought by SNOs on the aftermarket).
If SNOs see HDDs filling fast enough — e.g. 14.5TB in 3 months — they will for sure start taking reasonable risks and add HDDs at their own expense: in 10-12 months the HDD will be fully paid off. Or you will just take a share of Chia’s HDDs; people will find a way, do not worry.
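A rough payback sketch under stated assumptions: a used drive at ~$10/TB that fills over 3 months, and an assumed payout of $1.50 per TB-month stored (egress ignored for this low-egress workload; held-back amounts and electricity ignored too, which is why this lands a bit under the 10-12 months mentioned):

```python
# Hypothetical payback estimate for a used 14.5 TB drive at ~$10/TB that
# fills linearly over 3 months. Assumed payout: $1.50 per TB-month stored
# (an assumption; egress, held amounts, and power costs are ignored).
drive_tb = 14.5
cost = drive_tb * 10          # ~$145 purchase price (assumed)
rate = 1.50                   # $/TB-month stored (assumed)

paid = 0.0
months = 0
stored = 0.0
while paid < cost:
    months += 1
    stored = min(drive_tb, stored + drive_tb / 3)  # fills over 3 months
    paid += stored * rate
print(months)   # 8 — months until cumulative payouts cover the drive
```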

However, I would just focus on making it easy to start, and to stay, with Storj. (real S3! lol)

What would be the expected ingress rate per node compared to what we have today? Adding drives isn’t as time-consuming as upgrading the connection, I would say.

But it obviously doesn’t work this way… :roll_eyes:

In reality you’re forcing SNOs to use things like VPNs. I have 20 IPs, but all the storage space is in the same location. I’m sure this is what most SNOs are doing these days.