I am looking to build a giant PC to run virtual machines and install Storj on them.

I'm running 1 Gbit for my nodes out of 10 Gbit total bandwidth. I have never seen over 250 Mbit used by my nodes, and I'm going on 13 months running them. I still don't make $175 a month, 100%.

Is the server 100% online all the time?

Yes, I have 100% uptime.

https://www.youtube.com/watch?v=VijdQYzfbss It's in Spanish, but look at the graphic: 2 years, 1 node, in one month.

Yes, like I said, the calculator on the website is best case, and it's not a realistic number.

Minute 4:14. The calculator is at the beginning.

Right, but you're not figuring in the time it takes to start getting useful data. You don't get paid for ingress; you get paid for egress.

How much time, approximately? 2 years?

The problem is that since you weren't an early adopter, it will take you longer to make your money back. Since I started early, I got surge payouts included in my payouts, so I'm already in the positive for running my nodes; anything I make now is all income.

So the best option is to build a low-power system with a lot of HDDs and keep it connected for a long time, right?

Yes, exactly. That is the best way; you want it to be as efficient as possible.

Or build a server you intend to also use for other stuff, which then also runs a storage node for Storj.
However, having a cheap dedicated storage node server of course makes it possible to do even more crazy stuff with the powerful computer instead, since when running a storage node you really don't want too much downtime. So it very much depends on your use case.

My storage node is 2½ months old and so far has a total payout of $3, with $20 withheld. The web dashboard claims it has earned almost an additional $14 so far this month. Of course, I will only get like 10-20% of the roughly $20 I'll most likely earn this month, plus whatever surge payout there is, if any. With the network issues Storj has had this last month, I wouldn't be surprised if there was a surge payout, but I dunno.
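
For reference, a minimal sketch of how the held-back amounts work. The percentages below are my understanding of Storj's held-amount schedule at the time of writing; verify against the official documentation:

```python
# Approximate Storj held-amount schedule (assumed; verify with official docs):
# months 1-3: 75% held, months 4-6: 50%, months 7-9: 25%, month 10+: 0%.
# Half of the accumulated held amount is returned around month 15.

def held_fraction(node_age_months: int) -> float:
    """Fraction of a month's earnings that is withheld."""
    if node_age_months <= 3:
        return 0.75
    if node_age_months <= 6:
        return 0.50
    if node_age_months <= 9:
        return 0.25
    return 0.0

earned = 14.0    # this month's dashboard estimate, from the post above
age_months = 3   # node is ~2.5 months old, so still in the first bracket
paid_now = earned * (1 - held_fraction(age_months))
print(f"Paid out now: ${paid_now:.2f}, held back: ${earned - paid_now:.2f}")
```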

This is not really a short-term project, and it takes a long time for a storage node to fill, especially since each node can be up to 24 TB and the average ingress seems to be about 3 MB/s in my case, maybe a bit less.
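
As a rough sanity check on the fill time, here is a minimal sketch assuming the ~3 MB/s average ingress mentioned above and ignoring deletes (which stretch the real fill time out considerably):

```python
# Rough fill-time estimate for a storage node.
# Assumptions: 24 TB capacity, ~3 MB/s sustained average ingress, no deletes.

capacity_tb = 24
ingress_mb_per_s = 3.0

capacity_mb = capacity_tb * 1_000_000          # decimal units: 1 TB = 10^6 MB
seconds_to_fill = capacity_mb / ingress_mb_per_s
days_to_fill = seconds_to_fill / 86_400

print(f"~{days_to_fill:.0f} days (~{days_to_fill / 30:.1f} months) "
      f"to fill {capacity_tb} TB at {ingress_mb_per_s} MB/s")
```

With these optimistic numbers it works out to roughly three months of sustained ingress; in practice deletes and slower periods make it take much longer.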

This is based on peak load and still quite theoretical. So what if all nodes you run peak at the same time? Doesn’t really matter as the amount of traffic is spread among them and each individual node will simply use less RAM as a result. There will of course be some overhead, but while I would say 1GB RAM is not enough for a single node, 2GB is probably plenty for 2 nodes.

This isn't relevant when running GUI nodes; they don't rely on any virtualization but run natively on Windows.

You still seem to think the number of nodes matters. It doesn’t, you can run 10000 nodes and you will still get the same amount of traffic across all of them as you would with just 1 node. The only reason to run more than one node is if you want to share more than one HDD.

So even if the calculator was accurate, this number would apply to one node or the total of all 15 nodes. You can’t just multiply the number.
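
A toy calculation of that point (the daily traffic figure is invented purely for illustration):

```python
# Toy illustration: the network gives a fixed amount of traffic to all your
# nodes combined, so adding nodes only splits it. Traffic number is made up.

total_ingress_gb_per_day = 250.0  # hypothetical total for your connection

for node_count in (1, 2, 15, 10_000):
    per_node = total_ingress_gb_per_day / node_count
    print(f"{node_count:>6} nodes: {per_node:11.4f} GB/day each, "
          f"total still {total_ingress_gb_per_day:.0f} GB/day")
```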

However, the calculator is incredibly inaccurate. I’ve made an alternative in Google Sheets. You seem to want to share 15TB. I filled in your values for now, but if you want to play with the inputs, please save it to your own Google account. I don’t grant edit access to this sheet (people keep requesting that though for some reason).

It honestly baffles me how people think they can make thousands with just 16TB. How do you figure those economics work out? Think about what the customer would have to pay for so little space. Storj is not a get-rich-quick scheme and it's not mining. They offer an affordable storage service to customers by paying node operators a reasonable compensation.
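
As a back-of-the-envelope sketch of those economics (all rates below are assumptions for illustration only, not official Storj payout figures):

```python
# Back-of-the-envelope monthly income estimate for a node operator.
# ALL rates are assumed for illustration; check Storj's current payout rates.

stored_tb = 16                # shared capacity, from the post above
storage_rate = 1.50           # assumed $/TB/month paid for storage
egress_tb = 1.5               # assumed TB of paid egress per month
egress_rate = 20.0            # assumed $/TB paid for egress

gross = stored_tb * storage_rate + egress_tb * egress_rate
print(f"~${gross:.2f}/month gross, before held-back amounts and power costs")
```

Even with a full 16 TB and generous egress assumptions, that lands in the tens of dollars per month, not thousands.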

Honestly, if you bought this hardware just for Storj, you should seriously consider returning it. It's complete overkill, and the high electricity bills will make it even harder to break even over time. I run 3 nodes for a total of 22TB on a simple NAS. You don't need such power, and it is in fact wasted energy. There are tons of people running successful nodes on Raspberry Pi 4s.

1GB is plenty for a single node, at least on Linux. I have a system doing nothing but running two storagenode containers. In the last six months of operation, the host system has never exceeded 350MB used by all processes, so that’s including the rest of the OS. Average memory use is 180MB.

I would say that 512MB is easily enough to run two nodes, let alone one. (Of course, more I/O cache is better, but there’s no reason 1GB would be required.)

My storage node in Docker uses a few hundred MiB [insert alien abduction joke here].
Though my ARC uses 23.4 GB, and to be fair, I've been thinking about expanding it, because it's pretty much 100% effective 98% of the time, meaning it could easily benefit from more... lol.
But I suppose it's always like that with an ARC.

Many 1GB RPi nodes ran into OOM errors in the past. Now there have been software optimizations, so it has gotten better, but it’s barely enough during the highest peaks. Either way, that wasn’t my point. My point was that running n nodes would not lead to n times the amount of memory use.

Raspberry Pi 3 B+

That's kind of interesting.
Is it just me, or did my storage node not take up much memory the last few times I checked, like 100 MB? But this time I just checked and I'm basically at 600 MB, the same as yours. Does that mean memory usage of nodes is generally the same across the network at the same time? Of course, I assume reboots and whatnot may affect it.

But generally, would nodes use the same amount of RAM based on size, activity, age, or something unknown? For something like ZFS, deduplication also uses RAM, so having many nodes might actually use less RAM than on other filesystems.

Of course, it would never be more efficient than having one node :smiley: but it can mitigate the costs of running more a bit.

I think it really depends on activity. Usually my RAM usage is between 50 and 100 MB, but I also often see spikes of 1.8 GB.