Realistic earnings estimator

If I may ask, what are the good and the bad months? So I can take that into account with vetting nodes, etc.

Thank you.

Give me a moment while I fetch my crystal ball.

We’ve seen months that were half as profitable and up to twice as profitable as the average. But it’s impossible to predict, and with the network still in its early stages, that average may still shift a lot as well. The estimator shows past behavior, but is no guarantee of future behavior.


A safe assumption is always $1.5/TB stored :smiley: (since you get paid $1.5/TBm, so it would be storage with almost no egress).
The ingress however was rather variable; some “bad” months were just around 500GB/month of ingress. So it takes a long time to fill 8TB.
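As a quick back-of-the-envelope check (the 8TB node size, $1.5/TBm rate and 500GB/month ingress are just the figures mentioned above, not guarantees):

# Rough fill-time and storage payout using the illustrative numbers above.
node_capacity_tb = 8.0          # node size discussed in this thread
ingress_tb_per_month = 0.5      # a "bad" month of ingress
storage_rate_usd = 1.5          # USD per TB stored per month

months_to_fill = node_capacity_tb / ingress_tb_per_month     # 16 months
payout_when_full = node_capacity_tb * storage_rate_usd       # $12/month
print(f"~{months_to_fill:.0f} months to fill, then ~${payout_when_full:.2f}/month from storage alone")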


@HisEvilness additionally, if your download bandwidth isn’t the best, you may actually lose the data you’ve stored if we go through a large delete spell. The faster nodes will win new data more often, and if a large customer purges data (say an old archive), you may see your total used space decrease. There is a balancing act to it, where in a perfect (and simple) world it would look something like:

your_new_data = (your_download_bandwidth / global_network_download_bandwidth) * time
your_deleted_data = global_deletions * (your_previous_space / previous_global_storage_space)
used_space = your_previous_space + your_new_data - your_deleted_data

However, it isn’t purely distributed based on your percentage of global bandwidth; latency also matters. Additionally, statistics come into play, further skewing towards faster nodes (especially once the network grows sufficiently large that the small and slow nodes are no longer required to meet demand).
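Put as code, that simplified model might look something like this (a sketch only; I’ve substituted the network’s total monthly ingress for the bare time factor to keep the units consistent, and all of the global figures would have to be guessed):

# Sketch of the simplified (idealized) model above. The global figures are
# placeholders you'd have to estimate yourself; real node selection also
# depends on latency and success rates, not just bandwidth share.
def projected_used_space_tb(previous_space_tb,
                            your_bandwidth_mbps,
                            global_bandwidth_mbps,
                            global_ingress_tb,
                            global_deletions_tb,
                            global_storage_tb):
    # New data proportional to your share of the network's download bandwidth
    your_new_data = global_ingress_tb * (your_bandwidth_mbps / global_bandwidth_mbps)
    # Deletions proportional to your share of the data already stored
    your_deleted_data = global_deletions_tb * (previous_space_tb / global_storage_tb)
    return previous_space_tb + your_new_data - your_deleted_data

Since the global figures aren’t published month by month, this is only useful for reasoning about the proportions, not for actual forecasting.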

Bandwidth is completely irrelevant unless you have less than 10MB/s (~100mbit), in which case, during high ingress spikes, you might not be able to get all files downloaded in time. Above that, it doesn’t give you any advantage at all. Distribution is random and doesn’t have anything to do with the bandwidth of a node. The real effect of bandwidth isn’t big enough to consider when estimating your earnings or your ingress.

The estimator does look at your speed when it’s below 50mbit. This is at best a rough estimation though. But anything above that really doesn’t matter. I think between 10 and 50 you may still see some impact on peaks.


I am just trying to estimate how long it will take to fill a node, etc. All I really want from it is to pay for the running cost and then my internet connection, saving me some money. Not that I expect this to be 100% profit; it is like a start-up, it takes a bit to get into the green numbers and out of the red. And there is no set data ingress per month that I can count on, but I was trying to figure out ballpark numbers. When are the good months, or better put, the months where in general there is more traffic?

As for my server, which I also intend to use for some things other than Storj: it has a 3700X with a tweaked PBO 2.0 settings profile. Internet-wise I am on a 1000/100 connection, with 64GB of DDR4 @ 2666. Log files look good; my ratio is 99.9*% across the board when running some of the scripts on the log file. So the server is pretty snappy, running Unraid with VMs to host the Storj nodes on an SSD. Each node has 3GB of RAM, but I might consider 4GB, as the script shows it caps out at 3GB really quickly and I do not want to bottleneck my setup.

The last few months have been historically slow from what I can gather. I’m at 700GB after just under 3 months. There is a spreadsheet here that says 200GB the first month, then 800GB every month after that, I believe. I believe the expectation is lower than that right now.
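Just to put numbers on that (the spreadsheet figures are as reported above, not verified):

# Projected vs. actual ingress over the first three months, per the figures above.
vetting_month_gb = 200          # reported ingress while vetting (first month)
vetted_month_gb = 800           # reported ingress per month once vetted
actual_gb = 700                 # what I actually have after ~3 months

projected_gb = vetting_month_gb + 2 * vetted_month_gb         # 1800GB
print(f"Projected: {projected_gb}GB, actual: {actual_gb}GB")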

I’d calculate with 500GB per month ingress.
If you expect the node to pay for your (oversized) server, then it might take a while… If you instead just use a Raspberry Pi/Odroid HC2, then you’ll quickly get to the point where it pays for your running costs (electricity). But for your server it might take a while just to pay for your electricity.

My home server needs ~64W with 3 HDDs, which is about 10€/month, and I’d need at least 5-6TB stored to cover that if egress is low. Getting to 5TB could however take 10 months. Hope that helps a bit, but as always, ingress might change (it was rather low in recent months though, so I wouldn’t expect too much change, especially as ingress per node gets lower with each SNO that joins the network…)
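For what it’s worth, the break-even math above looks roughly like this (a sketch only: the electricity price, exchange rate and egress fraction are my own assumptions rather than figures from this thread, while $1.5/TBm storage and $20/TB egress were the published payout rates at the time):

# Rough break-even sketch; every input marked "assumed" is my own guess.
power_w = 64                    # server draw mentioned above
kwh_price_eur = 0.22            # assumed electricity price
usd_per_eur = 1.2               # assumed exchange rate

electricity_eur = power_w / 1000 * 24 * 30 * kwh_price_eur    # ~10 EUR/month
target_usd = electricity_eur * usd_per_eur

storage_rate_usd = 1.5          # USD per TB stored per month
egress_rate_usd = 20.0          # USD per TB of egress
egress_fraction = 0.02          # assumed: ~2% of stored data egressed per month

usd_per_tb_stored = storage_rate_usd + egress_rate_usd * egress_fraction
breakeven_tb = target_usd / usd_per_tb_stored                 # ~6.4TB
print(f"Electricity ~{electricity_eur:.1f} EUR/month, break-even at ~{breakeven_tb:.1f}TB stored")

With even a little egress the break-even lands in the 5-7TB range; from storage alone it would be closer to 8TB.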


Isn’t that too much?
The node software does not use much RAM at all: my 6 nodes running on a single RPi4B use up to 500MB in total out of 4GB (Linux OS included!). It only goes beyond that temporarily when activity is very high, or if disks start to stall (which is something that should be avoided at all costs… because when that starts, the OOM Killer is the next “destination”).

Right now, here is the current load of this machine:

top - 13:42:27 up 93 days,  4:23,  1 user,  load average: 1.07, 1.33, 0.98
Tasks: 165 total,   1 running, 164 sleeping,   0 stopped,   0 zombie
%Cpu(s):  2.6 us,  5.2 sy,  0.0 ni, 64.9 id, 22.1 wa,  0.0 hi,  5.2 si,  0.0 st
MiB Mem :   3906.0 total,   1317.0 free,    363.2 used,   2225.8 buff/cache
MiB Swap:      0.0 total,      0.0 free,      0.0 used.   3263.5 avail Mem 

My nodes run on VMs running Windows 10. I am looking at perhaps running them on Windows Server 2019 since it runs cleaner than Windows 10 with all its bloatware. You can run the VMs on 2GB, but it would be too restricted, and I have 64GB anyway, so it is not like I will run out of RAM for any project any time soon. With Linux you can use far less RAM, yes, but I could not get the Ubuntu nodes to work properly, so I defaulted back to Windows 10. Also, the RAM can act as a buffer with high traffic if that happens (rare atm), and the page file is with the OS on an SSD, while the actual node storage for Storj is on an HDD.


You aren’t running each node in an individual VM, are you? Especially with Windows you get crazy overhead per VM that way. And there is absolutely no advantage to that.


Yeh, I run a VM per node as it was easier to set up. And I made a copy of the image so I can deploy it again if I need more nodes. And an SSD does help with the performance of the page file when using less RAM. For instance, no page file with 32GB of RAM or more does wonders for FPS in gaming.

You can do all of that with Docker on Linux though, without any of the overhead.


I tried and it did not work, like I said. I have Unraid, so I might mess about a bit and see if I can get this Linux setup to work, as I can make VMs for it.

A major waste. A Debian VM with 4 Storj nodes uses less than 512MiB of RAM.

As for your original question, my nodes were growing at roughly 450GiB/month for the last month, which is bad. For comparison, last year there was a period when my nodes got >4.5TiB of ingress per month.


Yeh, Windows is not optimal but more accessible when you want to dip into Storj. I can also install a Debian VM, but I would need to test it out; I just wanted to get my nodes out the door and get them vetted to see if this was a nice project. Ingress has been slower, yes, but 4.5TiB vs 450GiB is a stark contrast to say the least. I guess it is not much less data, but a lot more SNOs contributing to it. Let’s hope for more data-rich space missions!

The absolute majority of that data came from test satellites, so it was mostly a matter of Storj doing something. BTW, right now my nodes are getting 2/3 of their data from customer satellites.

Hello

I created 2 new nodes (max 2TB each) in November 2020 and I only have 120GB on the first and 80GB on the second. I find this very low, is this normal? I have a third node, created in March 2020, of 2.5TB, which is full and works very well.

Please make sure that you use the latest version. We recommend using automatic updates: https://documentation.storj.io/setup/cli/software-updates#automatic-updates
In the case of Windows, the node should be updated automatically.