Backup storage node

I have been a Storj node operator for a couple of days. I have the following questions:

- How do I back up the node storage?

- How do I restore a backup if the HDD has to be replaced after a failure?

At the moment you cannot back up a node: if you restore an old snapshot you will be disqualified very quickly, since from the satellite's perspective you have lost storage pieces.


At the moment it is impossible. I have created a suggestion for this though.

A backup of the stored data is not useful, because the data store is dynamically changing every few minutes. Each node’s reputation is based on auditing of the current data…

A backed-up data store will be missing any new data, and will contain extraneous data that has since been deleted. The missing new data will cause a node restored from backup to be disqualified rather rapidly.
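To make the staleness problem concrete, here is a rough back-of-envelope sketch. All numbers are made-up illustrations, not real Storj figures: the older the snapshot, the larger the share of currently stored pieces the restored node is missing, and every audit of a missing piece fails.

```python
# Hedged illustration of why a restored snapshot fails audits: any piece
# uploaded after the snapshot was taken is simply absent from the restored
# disk. Node size and ingress rate below are arbitrary example numbers;
# deletions are ignored to keep the sketch simple.

NODE_TB = 4.0              # data held at the moment the snapshot was taken
INGRESS_TB_PER_DAY = 0.02  # hypothetical new data arriving per day

def audit_pass_fraction(snapshot_age_days: float) -> float:
    """Fraction of currently stored pieces a restored snapshot still holds."""
    new_data = INGRESS_TB_PER_DAY * snapshot_age_days
    return NODE_TB / (NODE_TB + new_data)
```

With these assumed numbers, a 30-day-old snapshot can only answer roughly 87% of audits, far below the near-perfect audit success a node needs to stay qualified.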

So, there will never be a method of backing up a Storj node data store. Such concepts do not apply to the nodes.


The data on a storage node is erasure-coded: essentially live data that the satellites constantly monitor and recompute to make sure Storj doesn't lose data when nodes fail, exit, or disconnect. Because of this, any backup is difficult at best, and currently the only way to provide redundancy for your storage node's data against hard disk failure is RAID or a similar solution.

RAID here would mean something like 5 disks with 1 redundant (RAID5-style), which is generally viewed as unsafe, though of course better than no redundancy. You would, however, sacrifice 20% of the capacity, whereas if you put one node on each drive you could run 5 separate nodes instead of one big node 4/5 the size.

So in the end it comes down to practical aspects of how your system is set up and what you plan to expand into.
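The capacity tradeoff described above can be spelled out with simple arithmetic (the disk size is an arbitrary example value):

```python
# Capacity comparison: five disks in a single-parity array (RAID5-style)
# versus one independent node per disk. 8 TB is just an example size.

DISK_TB = 8
DISKS = 5

raid5_capacity = (DISKS - 1) * DISK_TB   # one disk's worth goes to parity
per_node_capacity = DISKS * DISK_TB      # all five disks earn independently

# Fraction of raw capacity sacrificed to parity: 1/5 = 20% for a 5-disk array.
parity_overhead = 1 - raid5_capacity / per_node_capacity
```

So a 5x8TB RAID5-style array offers 32 TB that survives one disk failure, while five separate nodes offer the full 40 TB but each node dies with its disk.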

This does not make any sense and if true the network is very poorly developed.

It would be wise to try understanding how the network works before making such conclusions.

(The following numbers are from memory and they may be incorrect, but they should at least be close.)

Each uploaded piece is split into 130 chunks using erasure coding; any 50 chunks can be used to recover the uploaded piece. Those 130 chunks are uploaded to storage nodes in different /24 subnets and the first 80 to report success are committed; the rest are abandoned.

This means that there are 80 chunks stored on the network and only 50 are needed, so there is 60% redundancy; 31 different storage nodes all holding a chunk for the same piece would have to fail at the same time for that piece to be lost. Because no two chunks of the same piece are stored in the same /24 subnet, the chunks should also be fairly well distributed geographically, reducing the chances that a natural disaster would wipe out 31 chunks.
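The arithmetic behind those figures (which, as noted above, are from memory and may be imprecise) can be checked directly:

```python
# Illustrative arithmetic for the Reed-Solomon numbers quoted above
# (130 uploaded, 80 committed, 50 needed). These are the poster's
# from-memory values, not authoritative Storj parameters.

TOTAL_UPLOADED = 130   # chunks sent out per piece
COMMITTED = 80         # first successful uploads kept by the satellite
NEEDED = 50            # minimum chunks required to reconstruct the piece

redundancy = (COMMITTED - NEEDED) / NEEDED   # extra chunks beyond the minimum
failures_tolerated = COMMITTED - NEEDED      # chunks that may vanish safely
nodes_for_loss = failures_tolerated + 1      # simultaneous failures to lose it
```

That gives 60% redundancy, 30 chunks that can be lost without harm, and 31 simultaneous node failures required to lose a piece.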

Once the number of available chunks for a given piece falls below a certain threshold, the network triggers a repair. The piece is reassembled from the surviving chunks, re-encoded, and new chunks are uploaded to more nodes to restore the 30-chunk margin.
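A minimal sketch of that repair decision, assuming the 80/50 numbers above and a made-up repair threshold (the real threshold is a satellite configuration value not quoted here):

```python
# Hedged sketch of the repair logic described above: when the count of
# healthy chunks for a piece falls to the repair threshold, the satellite
# reconstructs the piece and uploads fresh chunks until redundancy is
# restored. REPAIR_AT = 55 is a hypothetical value, not Storj's real one.

NEEDED = 50       # chunks required to reconstruct the piece
TARGET = 80       # committed chunks when the piece is fully healthy
REPAIR_AT = 55    # hypothetical repair threshold

def chunks_to_create(healthy: int) -> int:
    """Return how many new chunks a repair would upload, or 0 if no repair."""
    if healthy < NEEDED:
        raise ValueError("piece unrecoverable")
    if healthy > REPAIR_AT:
        return 0                  # still healthy enough, no repair yet
    return TARGET - healthy       # top back up to full redundancy
```

With these assumptions, a piece with 70 healthy chunks is left alone, while one that drops to 55 triggers a repair that uploads 25 fresh chunks.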

Backups of individual storage nodes are not useful because nodes are constantly receiving new information – a backup is out of date the moment it is created. Moreover, such backups are not required, because the network can tolerate the total loss of many storage nodes and still recover the data they held, thanks to the built-in redundancy.


Think of it like this: the satellites know which nodes are online. Each node holds parts of sets of erasure-coded strings, and those codes are computed from the data that needs to be stored.

So long as a certain number of these unique pieces of the string still exist, the data is safe. When the count drops below a certain mark, the data is in danger of being damaged, and the network has to run some kind of recovery process that I don't really understand…

When a string becomes endangered, the satellite starts recomputing the erasure code, basically making it longer again, so there is room for more pieces to be lost…

Thus the data is a continually evolving mathematical construct, and you cannot simply archive it in a backup, because when the backup is restored it will no longer fit with, or complement, the live data in the cloud…

I'm sure there are some very smart people thinking about ways to back this kind of thing up in a live system, but I'm not sure it's even really possible. It's as if the data is alive in the cloud, kept that way by the computers, and if you take it offline it's dead data, at least if it's offline for any extended period of time…

You can, however, make your data storage safer by introducing redundancy. That isn't really a backup, but it might as well be in this case, because I doubt there is any other way to deal with the issue…

It doesn't seem fair that a node loses all its reputation in the face of a disaster just because you consider it not useful for us to protect ourselves with copies of our node.
This is starting to look like a pyramid scheme to me.

Keep in mind that nodes are designed to be somewhat disposable. If you have 5 HDDs you should run a separate node on each HDD. If a disk dies and you want to replace it with a good disk, just start a new node. Modern disk failure rates are pretty low and you should be able to get a good 4-5 years of income from a node.

Backups are actually kind of expensive: you need double the capacity to take a backup, so an 8TB node requires an 8TB backup disk. Why not instead use that 8TB backup disk to run another 8TB node and share 16TB of storage? You double your income potential, and if one node dies, you've made more money by sharing the backup disk than the backups would have saved you.

The more disks you have and nodes you run, the less the impact of an individual node death. It’s just operating expense at that point.
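The income argument above reduces to simple arithmetic. The payout rate below is a placeholder, not an actual Storj rate; only the ratio matters:

```python
# Rough earnings comparison from the post above: an 8 TB node plus an idle
# 8 TB backup disk, versus two 8 TB nodes and no backup. The $/TB-month
# figure is a made-up placeholder, not a real Storj payout rate.

TB_SHARED_WITH_BACKUP = 8      # one node earning, second disk idle as backup
TB_SHARED_AS_TWO_NODES = 16    # both disks earning as independent nodes
PAYOUT_PER_TB_MONTH = 1.5      # hypothetical $/TB-month

income_backup = TB_SHARED_WITH_BACKUP * PAYOUT_PER_TB_MONTH
income_two_nodes = TB_SHARED_AS_TWO_NODES * PAYOUT_PER_TB_MONTH

# Sharing the backup disk doubles income potential, regardless of the rate.
assert income_two_nodes == 2 * income_backup
```

Even if one of the two nodes eventually dies, everything it earned up to that point is income a backup disk sitting idle would never have produced.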


I fully agree, but it's simply a product of how the technology works…

It would also be awesome if we could have our living rooms in cars, but we cannot. Or we can, but that's called trains or planes, which is yet another different-but-similar technology.

Of course, you could just say to hell with it and do it anyway, but then your car most likely won't fit on the roads…

It is damn smart, though: living mathematical codes that keep data whole no matter what is lost. There is literally no single critical point that can be damaged, as long as the erasure code is long enough to tolerate the missing pieces, and renewing it is just math…

It's like RAID, but with math as the hard drives and processors as the manufacturers. Most likely the future of storage.

You can read this interesting information: