Operating a node with a remote drive

I tried a setup like that about two years ago, and at the time it was close to unusable even at hundreds of gigabytes. The node consumed large amounts of memory and was failing most requests at times, despite the storage being basically in the next rack, not going over the Internet.

Today I suspect it may work in some circumstances with a very, very careful setup. As you noted, and @Alexey will quickly confirm, this is not a supported configuration, and as such cannot be operated on the main network under the T&C, but you can try it on, let’s say, the test network. You would need to:

  • Ensure that your remote file system will not fail randomly. Usually this means a hard mount, i.e. a way to make sure that if there is a problem connecting to your remote drive, the file system will retry the connection rather than pass an error to the application. See some explanation on ServerFault, and the fstab sketch after this list.
  • Store the databases locally. They’re small, so this is usually not a problem (see the config sketch after this list).
  • Disable the initial piece scan (also covered in the config sketch below).
  • Disable forced syncing. This requires recompiling the node yourself; the official releases do not expose such an option.
  • Enable as much file attribute caching as you can afford, even at the risk of losing consistency. Even with the initial piece scan disabled, the node will need to periodically scan all piece metadata, essentially listing every file stored. Here’s an example for NFS.
  • Not use Windows. I just can’t imagine most of the above working reliably on Windows :stuck_out_tongue:
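
As a minimal sketch of the hard-mount and attribute-caching points, here is what an /etc/fstab entry could look like. The server name, export path, and mount point are placeholders, and the exact cache timeout is something you would need to tune for your own setup:

```
# Illustrative NFS entry for a storage node. "hard" makes the kernel retry a
# lost connection indefinitely instead of returning I/O errors to the node
# process; "nocto" relaxes close-to-open consistency and "actimeo=60" caches
# file attributes for 60 seconds, both cutting metadata round trips at the
# cost of consistency.
nas.example.com:/export/storj  /mnt/storj  nfs  hard,nocto,actimeo=60  0  0
```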
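
And for the local-database and piece-scan points, assuming a reasonably recent storagenode release, the relevant config.yaml entries would look roughly like this (paths are placeholders):

```
# Keep the SQLite databases on a local disk instead of the NFS share:
storage2.database-dir: /var/lib/storj/databases

# Skip the file walker that scans every stored piece on startup:
storage2.piece-scan-on-startup: false
```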

Of course, with all these changes your node will be a lot less reliable. Whether this is worth your time, and how strong your “need to set up a node” this way really is, you should decide on your own.
