Re-implement RAID0/MergeFS in storagenode

Definitely yes! The only thing we can ask for is the possibility to add more than one drive to a production node without RAID, or a configuration-file option to target each directory (or, better said, each satellite) to a specific folder, plus more in-memory features.


A node being able to handle multiple drives/multiple directories would be very nice. I’d much rather be able to do that… than have to incubate a new Identity when adding an HDD. But I can live with how it works now :person_shrugging:

1 node per drive and 1 drive per node should always be the only recommended and used option. Why risk tens of TB of data to gain a month of vetting?
Held amount? Not such a big deal. I’m pretty sure that if this goes through the roof, Storj will make some adjustments. They can’t hold hundreds of dollars as collateral per node.


If we were able to put satellite folders on different drives, it would be more reliable IMHO… losing one drive would mean losing one satellite, not four. But that’s not really the topic to discuss here :slight_smile:

The usage is not the same for all satellites. After a year you could end up with 1 TB occupied on the drive for ap1 and 30 TB on the one for US1.


Actually, it’s the worst idea ever: that node would effectively be using RAID0. One disk failure and the node is dead.
We are not going to re-implement a bad version of RAID at any level; there are teams and products that have evolved over dozens of years to do it reliably, so you may use them right now, or start another node. Simple.

It would be even more fragile. The check happens only for the storage folder/location, not per satellite. If one disk is disconnected, you have “lost” the unattached data, and the node is permanently disqualified within hours.
No, trying to implement a bad version of MergeFS is an even worse idea than re-implementing RAID0.
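Rough arithmetic behind “one disk failure and the node is dead”: a striped (RAID0/MergeFS-like) set spanning n disks is lost when any one of them fails. A minimal sketch; the 3% annual failure rate is an assumption for illustration, not a measured figure:

```go
package main

import (
	"fmt"
	"math"
)

func main() {
	// Assumed annual failure probability per disk: purely illustrative.
	p := 0.03
	for _, n := range []int{1, 2, 4, 8} {
		// A striped set is lost if ANY disk fails: P(loss) = 1 - (1-p)^n
		loss := 1 - math.Pow(1-p, float64(n))
		fmt.Printf("%d disk(s): %4.1f%% chance per year of losing the whole node\n", n, loss*100)
	}
}
```

Under these assumptions, eight striped disks give a loss chance per year more than seven times that of a single drive.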

I have an idea, for SNOs like me who cannot add nodes because of limits, but can add drives.

Could the software work with sub-node identities that become the “main” identity for that computer if the “main node” and its drive fail? Like a queue? Maybe a command-line command to signal that the main node won’t come back?

Note: this started as someone talking about one node handling “more than 1 drive” (and I mentioned more than one directory). That doesn’t need to work like RAID0, and it doesn’t need to work like MergeFS, so I don’t know where that title came from.

One node install could absolutely still handle one Identity per drive/directory, no different from how one node handles multiple satellites today: the requests specify the one they want.

Each drive/directory could still run (or fail) independently. They could still run their own filewalkers, pass their own audits, have their own holdbacks, and be removed or gracefully exited. You just wouldn’t have to install/upgrade 10 nodes and use 10 IPs (or 10 ports) to handle 10 HDDs anymore.

Identities are just a couple of files and text descriptors: they could still have their own tracked speeds and suspension/audit/online scores. They aren’t so special that they need their own web service and UIs. Can you imagine if webservers still needed one install per domain? It would be madness: instead they examine the request, and then act appropriately (including using different SSL certs if needed).
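A minimal sketch of the idea (none of this is Storj code; every name is invented): the process keeps a registry of identities, each bound to its own drive, and routes each incoming request to the identity it names, just like requests already name the satellite they belong to.

```go
package main

import (
	"errors"
	"fmt"
)

// Store stands in for everything tied to one drive: identity keys, piece
// storage, databases, scores. All names here are hypothetical.
type Store struct {
	NodeID string // the identity this drive answers for
	Path   string // mount point of the drive
}

// Registry maps the node ID named in an incoming request to its drive,
// the same way requests already name their satellite.
type Registry map[string]*Store

// Route returns the store for the requested identity, or an error. A dead
// drive only fails requests for its own identity, not for its siblings.
func (r Registry) Route(nodeID string) (*Store, error) {
	s, ok := r[nodeID]
	if !ok {
		return nil, errors.New("unknown identity: " + nodeID)
	}
	return s, nil
}

func main() {
	reg := Registry{
		"identityA": {NodeID: "identityA", Path: "/mnt/disk1"},
		"identityB": {NodeID: "identityB", Path: "/mnt/disk2"},
	}
	if s, err := reg.Route("identityB"); err == nil {
		fmt.Println("serving from", s.Path)
	}
}
```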

No. They are independent, and must remain so in my opinion. Run as many identities as you want. If you want to tie them together under one account in a reputation system, it must also consider the history. Are you sure that you really want this? Any DQed node would put your next (never mind much better) nodes at the bottom of the selection queue. I do not think so.

From the conversation that started the topic: @vladro suggested implementing this crap to handle more than one HDD under the same identity, which is still not feasible in my opinion. Especially from the point of view of re-implementing RAID or MergeFS behavior in the storagenode’s code (it doesn’t make any sense to me, but well…).
I believe that OS functions must be handled by the OS, not by user applications…
However, it’s just my opinion in the end…

No. They are used to cryptographically confirm the usage, so it must be a separate identity.
If it were supposed to be the same, it would not differ from RAID0/MergeFS and would act accordingly: with one disk failure, the whole identity would be disqualified.

Are we reading the same first comment? Where does it mention the “same identity”?

They mention “the same node”, which is an exact equivalent of “the same identity”.
One identity = one node; no other interpretation.
If it should be divided by “paths”, why not run separate identities?

I believe what’s discussed here is a suggestion to overcomplicate the setup. You didn’t convince me, so it’s unlikely you would convince our managers to even take a look at this…

Of course there are other interpretations. One node could handle multiple identities, no different from how they handle multiple satellites now. Yes, that would be a change. People are asking for a change.

Like… why not run one node per satellite? Doesn’t having one node that handles multiple satellites “overcomplicate the setup”? Obviously not.


Yes, and they would be: like many slaves under a master program.

But the satellites should know they are all on the same hardware and, of course, only offer the required bandwidth jointly.

SNOs like me are stuck, unable to expand in bandwidth, drives, or locations.

If you say to spin up new nodes without the required bandwidth… I will consider it.

Unfortunately not. In the crypto world, the identity is the only proof that’s needed. Yes, we are not about cryptocurrencies… but we are about cryptography and the security of your data, so the identity is the key, literally.

My opinion: they must not. It could be true for the Storj-operated satellites (even there, they do not communicate with each other; this is not implemented in the protocol, and must not be, in my opinion…), but it would definitely not be true for any Community satellite (why should they trust the Storj ones?!).

Some kind… But actually, it would be better if they were separate identities (and drives!). This would reduce your losses if something went wrong (like not enough usage…), and you could delete one without affecting the main node.

…and one simple web service… could handle many of them.

Look at webservers. They handle hundreds or thousands of separate certificates: underpinning the core of web finance. SSL protects more value moving around the internet in a second than Storj has handled in the existence of the company. Arguably more important than identities: and yet single-install web services manage buckets full of them :bucket: :stuck_out_tongue_closed_eyes:
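To make the webserver analogy concrete, this is roughly how a single Go service picks among many certificates per request via SNI. The `crypto/tls` hook used here is real; the hostnames and file paths are placeholders:

```go
package main

import (
	"crypto/tls"
	"log"
	"net/http"
)

func main() {
	// Hostname -> certificate/key base path. All names are placeholders.
	certs := map[string]string{
		"a.example.com": "/etc/ssl/a.example.com",
		"b.example.com": "/etc/ssl/b.example.com",
	}
	cfg := &tls.Config{
		// Called during each TLS handshake: inspect the hostname the
		// client asked for (SNI) and return the matching certificate.
		// A real server would cache these instead of re-reading disk.
		GetCertificate: func(hello *tls.ClientHelloInfo) (*tls.Certificate, error) {
			base, ok := certs[hello.ServerName]
			if !ok {
				base = certs["a.example.com"] // default fallback
			}
			cert, err := tls.LoadX509KeyPair(base+".crt", base+".key")
			if err != nil {
				return nil, err
			}
			return &cert, nil
		},
	}
	srv := &http.Server{Addr: ":443", TLSConfig: cfg}
	// Empty filenames: certificates come from GetCertificate above.
	log.Fatal(srv.ListenAndServeTLS("", ""))
}
```

One listener, many certificates, selected per request: the same dispatch shape being proposed for identities.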

It’s not that I’m arguing Storj must or should change nodes to be able to handle multiple identities (as a path to supporting multiple drives or paths in a single install). It’s your insistence that it’s not possible that’s puzzling. Of course it is.

I mean something like the /24 subnet rule, but for bandwidth. Not satellites communicating with each other.

Wouldn’t it be better to hardcode the nodes on the same hardware together, but still as independent identities, instead of allowing them to trick the network into thinking they are not?

Like specifying the 2nd path and identity/database and log path + log level in the yaml? (Roughly what the sketch below maps out.)

Or maybe just add this exception to the ToS/requirements?

Running multiple nodes is already possible, but limited by the requirements.
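To make the yaml idea above concrete: a purely hypothetical sketch of what a per-drive entry could map to in code. No such option exists in storagenode today, and every field name here is invented:

```go
package main

import "fmt"

// NodeLocation is hypothetical: it sketches what one per-drive entry could
// carry if each drive had its own identity, database, and log settings.
type NodeLocation struct {
	IdentityDir string `yaml:"identity-dir"` // e.g. /mnt/disk2/identity
	StoragePath string `yaml:"storage-path"` // e.g. /mnt/disk2/storage
	DatabaseDir string `yaml:"database-dir"` // e.g. /mnt/disk2/db
	LogPath     string `yaml:"log-path"`     // e.g. /var/log/sn-disk2.log
	LogLevel    string `yaml:"log-level"`    // e.g. "info"
}

func main() {
	// Two drives, two sub-identities, one process: the idea being floated.
	locations := []NodeLocation{
		{"/mnt/disk1/identity", "/mnt/disk1/storage", "/mnt/disk1/db", "/var/log/sn-disk1.log", "info"},
		{"/mnt/disk2/identity", "/mnt/disk2/storage", "/mnt/disk2/db", "/var/log/sn-disk2.log", "warn"},
	}
	for _, loc := range locations {
		fmt.Printf("sub-node: identity=%s storage=%s\n", loc.IdentityDir, loc.StoragePath)
	}
}
```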

You are correct; I would say that it’s not possible without a code change. But what’s the value of that?
@Roxor @daki82
Why can’t you run a separate process for the other disk?


For me, if I have 50 Mbit/s downstream (at least in 2 months after moving; now I have 100, but if I spin up a new node it would later violate the requirements), it’s enough for 2 nodes. The 12 TB and 20 TB disks are Storj-only; the PCs are not. They are 50-80% filled.
Migrating would be a pain: probably not profitable, or it would mean shrinking the nodes drastically.
I have 2 bays free for disks up to 16 TB each, plus plenty of cores and RAM.
Earning ±17€/month.

An additional internet line is possible, but costs 20-60€/month.
(I may consider it later, if the big client comes on board.)
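Making that arithmetic explicit (numbers taken from the figures above; pure break-even logic, nothing Storj-specific):

```go
package main

import "fmt"

func main() {
	// Numbers from the post above: current earnings vs. the cost range
	// of an extra internet line.
	current := 17.0                 // EUR/month earned today
	lineLow, lineHigh := 20.0, 60.0 // EUR/month for an additional line
	fmt.Printf("a second line must add %.0f-%.0f EUR/month just to break even\n", lineLow, lineHigh)
	fmt.Printf("that is %.1f-%.1f times today's total earnings (%.0f EUR/month)\n",
		lineLow/current, lineHigh/current, current)
}
```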

Do you see the upcoming problem?

Your CPU has only 2 cores? By the way, how heavily are they loaded?

With additional services running? How is that related?
If your node is full but you have free space you want to share with the network, you may spin up the next one and it will get the full possible load.