Loop stack trace discovered in a 4 MB line in storagenode logs that resulted in a crashed node

From what I tested this week, the node is not suitable for such settings. A one-disk node no longer performs well: with just one node under load, all the others start to experience high latency. The I/O gets saturated, delays grow, and the other drives queue up :slight_smile: Storj has reached the point where it needs an already-powerful filesystem to handle such loads. But wearing out an SSD/NVMe at these prices is not profitable, so I hope for optimization in the future.

The node slows down on both a $500 disk and a $100 disk :slight_smile: In any case, with such a load an SSD/NVMe cache is needed, and that is an additional recurring cost :slight_smile:

Yes, the same error as before. Did you try to optimize the filesystem?

This is already implemented in the node selection:

You can increase the check timeout if you know that your hardware is OK and simply cannot keep up with the load.
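For example, the disk readability/writability check timeouts can be raised in the node's `config.yaml`. A sketch, assuming the `storage2.monitor` flag names used by recent storagenode releases; verify the exact names and defaults against your version's `storagenode setup --help` before relying on them:

```yaml
# config.yaml (illustrative values, not recommendations)
# Allow a slow-but-healthy disk more time to answer the readable check
storage2.monitor.verify-dir-readable-timeout: 1m30s
# Same for the writable check
storage2.monitor.verify-dir-writable-timeout: 1m30s
```

After changing these values, restart the node for them to take effect.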
One-disk systems can still work (all three of my nodes are configured as one node per disk, and they still work without an SSD cache), but perhaps not in all setups. Unfortunately I cannot check on my Pi3 now; it cannot boot anymore (the SD card failed), and it's in another country, so I do not know when I will be able to fix it. For now I have just switched it off via a smart plug.