Storj consumes a lot of IOPS and affects other loads on the disks

This is just me sharing my experience from running a Storj node.

You know that Storj promotes the idea that you can put an unused resource (disk space) to use and even get paid for it.

It works great for unused disk space. But you also need to think about IOPS.

Recently I had the idea: I use a 16TB HDD for video surveillance (Frigate) running in an Ubuntu VM, but I really need only around 1TB of it, so the rest could go to Storj. Currently Storj uses about 5TB.

Well, I found that Storj sometimes puts so much IOPS load on that HDD that disk access latency in the Ubuntu VM grows really bad. And when the latency exceeds 8 seconds (!), the VM crashes.
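To see how bad the spikes get while the node is busy, you can watch per-disk latency with `iostat -x 5`. As a minimal sketch of the same idea (nothing Storj-specific, just standard Linux /proc/diskstats counters), this computes average read/write await per device over a short interval:

```python
#!/usr/bin/env python3
"""Rough per-disk latency sampler based on /proc/diskstats (Linux only).

Prints average read/write await (ms per completed IO) over a sampling
interval, similar to what `iostat -x` reports as r_await / w_await.
"""
import time


def read_diskstats():
    stats = {}
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            dev = fields[2]
            # field 3 = reads completed, field 6 = ms spent reading,
            # field 7 = writes completed, field 10 = ms spent writing
            stats[dev] = (int(fields[3]), int(fields[6]),
                          int(fields[7]), int(fields[10]))
    return stats


def sample(interval=5.0):
    before = read_diskstats()
    time.sleep(interval)
    after = read_diskstats()
    for dev, (r1, rt1, w1, wt1) in after.items():
        if dev not in before:
            continue
        r0, rt0, w0, wt0 = before[dev]
        dr, drt = r1 - r0, rt1 - rt0
        dw, dwt = w1 - w0, wt1 - wt0
        r_await = drt / dr if dr else 0.0  # avg ms per completed read
        w_await = dwt / dw if dw else 0.0  # avg ms per completed write
        if dr or dw:
            print(f"{dev:10s} r_await={r_await:8.1f} ms  w_await={w_await:8.1f} ms")


if __name__ == "__main__":
    sample()
```

When the node is doing a filewalker run or heavy garbage collection, you can watch these numbers climb into the hundreds of milliseconds and beyond on a shared HDD.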

I filed a bug for this issue against the hypervisor:

High latency by itself should not result in a VM crash.
But as an SNO you should know that Storj can put so much IOPS load on an HDD that the idea of Storj using only an “unused” resource is not really practical.

With Storj you need to think about what other loads are on the same disk.

A valid scenario is to store backups on the same disks as Storj, because saving backups is not latency-sensitive.
But any latency-sensitive application will have issues running on the same HDD as Storj.


This is the worst possible scenario. The requirements for video and storj are opposites.

You have sequential IO on a few massive files versus random IO on hundreds of millions of small files. A well-optimized surveillance system won’t support that.

You need fast access to the metadata of those millions of files. That is not a requirement for video.

In the end — don’t run storagenode on an NVR


You may run a node on an NVR, just not on the same disk :slight_smile:
But yes, the load is very similar to a NAS.

Good anecdote. My storj disks are too busy filling up or deleting files to put much persistent data on them.

(Although sometimes I fill unused space with chia plots; chia farming uses almost zero IOPS.)

But yeah, while all my storj disks are in my home server, my home data is on separate disks.

Not necessarily; it just depends on the load type.
For example, my nodes share disks with the hypervisor and several VMs (not as many as in the past, but still), and they all seem happy.
Those VMs are not really about storage; only their VHDX files live there. They are all about networking and such (a k3s cluster with Longhorn and a small Nomad cluster on three nodes, plus Consul and Vault in particular).