Storj Vetting & Operation - Multinode - SingleIP

I think it’s both electricity costs and noise. A lot of Unraid users are coming at the home server game by adding a bunch of drives to an old gaming PC, so it might be sitting in the computer room next to their new gaming PC (painting the community with really broad strokes).

I flirted with Unraid coming from an Ubuntu server in my basement that ran ZFS all day, so I just yolo’d it with all drives running all the time. You actually get better write performance to the array if you use reconstruct write (i.e. turbo writes), which behaves similarly to other real-time RAID systems in that it reads all the remaining drives while writing in order to compute parity. By default this is turned off to prevent “unnecessary” drive spin-up, which results in half-speed writes to the array because it has to read the existing data and parity before it can write both back.
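The two parity modes can be sketched with a rough back-of-the-envelope model. This is a simplification under assumed numbers (the 150 MB/s per-drive figure is hypothetical, not a measurement), just to show why the default mode lands at roughly half speed:

```python
# Rough throughput sketch for Unraid parity writes (illustrative assumptions only):
# - read-modify-write (the default): the target data drive and the parity drive
#   must each be READ (old data, old parity) and then WRITTEN for every block,
#   so those drives make two passes -> roughly half of a single drive's speed.
# - reconstruct write ("turbo"): all remaining data drives are read in parallel
#   while the target and parity drives only write -> roughly full drive speed,
#   at the cost of spinning up every drive in the array.

DRIVE_MBPS = 150  # hypothetical per-drive sequential throughput

def read_modify_write_speed(drive_mbps: float) -> float:
    # Each involved drive does a read pass plus a write pass over the same data.
    return drive_mbps / 2

def reconstruct_write_speed(drive_mbps: float) -> float:
    # Every drive does a single pass: reads on the other data drives,
    # writes on the target data drive and the parity drive.
    return drive_mbps

print(read_modify_write_speed(DRIVE_MBPS))  # 75.0
print(reconstruct_write_speed(DRIVE_MBPS))  # 150.0
```

The trade-off is exactly the one described above: turbo mode is faster but forces every drive to spin up for any write.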

The OS-managed write cache is one way to deal with the slow performance of the array if one doesn’t want to do turbo writes. Plugins exist to help users hunt down the cause of drive spin up and optimize which files stay on the cache vs what files are at rest. Optimizing hard drive spindown and spinup is just ingrained in the Unraid ethos.

For a media server, I can understand the perspective to some extent: keep media at rest until you’re serving it up to a handful of users or whatever.

On the other hand, for a Storj node, I think a SNO should aim for the opposite of this and keep drives spinning all the time.


Right, I forgot about how I started… I used an Odroid HC2 in my living room and also tried to have it spin down all the time so I wouldn’t have to hear the HDD, which was annoying. But as you said, for a media/backup server that’s fine because it would hardly be used anyway.
Thanks for reminding me about that.

Any more serious workload like a webserver and especially storj is not going to do well with frequent spin-downs.

Does egress time have any impact on reputation and future data ingress? If I have multiple drives, should I try and prioritise filling the one that will respond faster?

This will attract more SNOs as a result, and the greater traffic will be shared across a greater number of nodes, so we will end up at exactly the same starting point: the equilibrium of supply and demand.

Not really. The node selection is random. Do not try to prioritize anything - you will not gain anything from that, but you can break the online score and get your “deprioritized” nodes suspended.

Thank you, yes, I’ve been on the fence about on vs. off. Either way, the drives I’m using for Storj are dedicated to only this share and are set to “fill up”. The drives being used by Storj are set to never spin down once they begin receiving data. That said, you may be right about the choke points and parity workloads. I will do some testing after my node is vetted and see which makes more sense.

I’m not obsessed, but I have 24 3.5″ drives in a single chassis, and as such the vibrations are substantial when all are spinning; thus less spinning == less vibration == longer lifetime.

good point. :slight_smile:

Well, with more customers we could stop relying on test data to get a decent payout.
If test data is decreased carefully as customers join in, the balance should remain the same.

I don’t think that the long term goal is to have 60+% of our payout come from test data and thus directly from Storj. From what I see we’re still a long way from being able to earn a decent payout with only customer data, so I’d agree with @geeksheikh in saying that we need a bigger sales/marketing team.


If you keep them running and let them spin 24/7, they’ll last longer. I have 24 3.5″ drives in a single chassis, and 12 3.5″ drives in another chassis too.


Then you can stop sign-ups for new nodes. There are several ways to hold the balance between customers and SNOs, but no marketing for new customers would be the worst one, I think :wink:

It isn’t needed - the network will rebalance itself. My point is that you should not expect huge growth even when big customers come - we will reduce the test load and SNOs will add nodes. The result would hardly change - the load would stay similar to what it is now.


And why does it seem to me that the IP condition and all sorts of checks are only needed so that the organizers make money on those who launch nodes for 1-30 days, see that there is nothing to earn here, and leave?
The service for selling space is not advertised anywhere; because of that, my six-month-old nodes get at best 15 GB of content per day.
Patches are released every week which, from my point of view, are senseless until one global problem for end users is solved: the absence of a simple backup program with Storj support (don’t bring up Duplicati - it cuts files into pieces, and nothing can be restored without it) and of a human-readable file browser that connects buckets as local disks.
And advertise the service. 3.6 TB in 14 months is nonsense!!!
Let people earn!

or even just uplink: Uploading Your First Object CLI - Storj DCS
MongoDB Ops Manager Backup - Storj DCS

or set up a gateway: Self-hosted S3 Compatible Gateway - Storj DCS and use it from your browser at http://localhost:7777

By the way, storage alone is not where you would earn; backups are not so interesting for SNOs because they are rarely downloaded back. More interesting options are: