Technically it is currently 29/52/80/110 instead of 29/35/80/110 (the second number, the repair threshold, was raised from 35 to 52).
29 is bad for performance reasons because it impacts the piece size. Storage nodes are currently forced to store pieces with an odd size that doesn't align naturally with hard drive sector sizes. We are going to change that to 16 or 32 instead, which should give us better performance on the storage node side.
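To make the alignment point concrete, here is a minimal sketch. It assumes a 64 MiB segment, a 4 KiB sector size, and a simple ceiling division for the piece size; the actual piece-size formula in the uplink may differ in detail, but the divisibility argument is the same.

```go
package main

import "fmt"

// Illustrative only: with k=29 the piece size does not divide
// evenly into 4 KiB sectors, while k=32 aligns exactly.
// The 64 MiB segment size and the ceil-division piece-size
// formula are assumptions for this sketch.
func main() {
	const segmentSize = 64 * 1024 * 1024 // 64 MiB segment (assumed)
	const sectorSize = 4096              // common drive sector size

	for _, k := range []int{29, 32} {
		pieceSize := (segmentSize + k - 1) / k // ceiling division
		fmt.Printf("k=%d: piece size = %d bytes, offset into last sector = %d\n",
			k, pieceSize, pieceSize%sectorSize)
	}
}
```

With k=29 every piece ends partway into a sector (offset 3955 in this model), so each piece wastes part of its final sector and forces unaligned I/O; with k=32 the piece is exactly 2 MiB and sector-aligned.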
In terms of repair traffic, we noticed that we are in a good spot. We haven't lost a single file. The first set of numbers was selected with a conservative intent in mind. For the next iteration, we are looking for numbers that would theoretically reduce durability (from 100% to 99.9999…%) but should give us advantages in terms of storage expansion factor, storage node payouts, and performance. We just need to be careful not to get too aggressive. Ideally we keep durability very close to 100% and get all the other benefits at the same time.
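The expansion-factor side of that trade-off is easy to quantify. As a rough sketch, take the expansion factor to be stored pieces divided by k, and assume the success threshold of 80 from the 29/52/80/110 scheme as the typical number of stored pieces (real counts vary over time as nodes churn):

```go
package main

import "fmt"

// Rough sketch of the trade-off described above: raising k while
// keeping the number of stored pieces fixed shrinks the expansion
// factor, i.e. less raw storage (and payout) per byte of user data.
// storedPieces = 80 is an assumption based on the success threshold.
func main() {
	const storedPieces = 80.0
	for _, k := range []float64{29, 32} {
		fmt.Printf("k=%.0f: expansion factor = %.2f\n", k, storedPieces/k)
	}
}
```

This prints roughly 2.76 for k=29 versus 2.50 for k=32, which is where the expansion-factor savings come from; the catch is that the other three numbers have to be re-tuned alongside k to keep durability where we want it.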
We are still working on that. We have a few candidates that we want to test on saltlake or europe north first. We want to test how expensive the repair traffic would be, the durability of files with a large enough set of data, and, last but not least, upload and download performance.