How many failed audits are acceptable?

What device are you using?
If it is not a Synology, then please change the filesystem ASAP.

There is currently (2019-07-07, Linux ≤ 5.1.16) a bug that causes a two-disk raid1 profile filesystem to become permanently read-only the second time it is mounted in a degraded state, for example because of a missing, broken, or SATA-link-reset disk (unix.stackexchange.com, How to replace a disk drive that is physically no more there?). This issue is probably mitigated by using a three-disk raid1 profile filesystem rather than a two-disk one. With two copies of all data and metadata spread over three disks, the filesystem can lose any one disk and continue to function across reboots unless a second disk dies, because two copies of data and metadata can still be written to the two surviving devices. The “filesystem becomes read-only” bug is avoided, because it is only triggered when it becomes impossible to write two copies of data and metadata to two different devices. As an alternative, Adam Borowski has submitted [PATCH] [NOT-FOR-MERGING] btrfs: make “too many missing devices” check non-fatal to linux-btrfs, which addresses this issue; so does Qu Wenruo’s yet-unmerged Btrfs: Per-chunk degradable check patch. The thread surrounding Borowski’s patch is an excellent introduction to the debate over whether btrfs volumes should be run in a degraded state.
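For concreteness, here is a minimal sketch of what creating such a three-disk raid1 volume and replacing a lost disk might look like with btrfs-progs. The device names (/dev/sdb, /dev/sdc, /dev/sdd, /dev/sde), the devid, and the mount point /mnt/data are placeholders, not values taken from the sources above:

    # Create a filesystem that keeps two copies of data and metadata
    # spread across three devices.
    mkfs.btrfs -m raid1 -d raid1 /dev/sdb /dev/sdc /dev/sdd
    mount /dev/sdb /mnt/data

    # If one disk disappears, the volume can still be mounted degraded.
    mount -o degraded /dev/sdb /mnt/data

    # Find the devid of the missing device, replace it with a new disk,
    # then rebalance so every chunk has two copies again.
    btrfs filesystem show /mnt/data
    btrfs replace start 3 /dev/sde /mnt/data    # 3 = devid of the missing disk
    btrfs balance start /mnt/data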

https://wiki.debian.org/Btrfs

Another key issue with btrfs:

Raid5 and Raid6 Profiles
“Do not use BTRFS raid6 mode in production, it has at least 2 known serious bugs that may cause complete loss of the array due to a disk failure. Both of these issues have as of yet unknown trigger conditions, although they do seem to occur more frequently with larger arrays” (Austin S. Hemmelgarn, 2016-06-03, linux-btrfs).

Do not use raid5 mode in production because, “RAID5 with one degraded disk won’t be able to reconstruct data on this degraded disk because reconstructed extent content won’t match checksum. Which kinda makes RAID5 pointless” (Andrei Borzenkov, 2016-06-24, linux-btrfs).
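If an existing volume already uses the raid5 or raid6 profiles, the current layout can be checked and converted in place. A hedged sketch, with /mnt/data again standing in for your mount point; the conversion rewrites every chunk and can take a long time on a large array:

    # Show which profiles data and metadata currently use.
    btrfs filesystem df /mnt/data

    # Convert both away from the parity profiles, e.g. to raid1.
    btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/data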

2016-06-26 Update
Once again, please do not use btrfs’ raid5 or raid6 profiles at this point in time! In the thread [BUG] Btrfs scrub sometime recalculate wrong parity in raid5, Chris Murphy found the following while testing btrfs raid5’s ability to recover from csum errors:

I just did it a 2nd time and both file’s parity are wrong now. So I did it several more times. Sometimes both files’ parity is bad. Sometimes just one file’s parity is bad. Sometimes neither file’s parity is bad. It’s a very bad bug, because it is a form of silent data corruption and it’s induced by Btrfs. And it’s apparently non-deterministically hit (2016-06-26).
In another email in this thread, Duncan suggested “And what’s even clearer is that people /really/ shouldn’t be using raid56 mode for anything but testing with throw-away data, at this point. Anything else is simply irresponsible” (linux-btrfs, 2016-06-26).
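Because this is silent corruption, the only way to notice it on an existing raid5/raid6 volume is to scrub regularly and watch the per-device error counters. A minimal sketch, again with a placeholder mount point:

    # Scrub in the foreground (-B) and report statistics per device (-d).
    btrfs scrub start -Bd /mnt/data

    # Cumulative error counters: read, write, flush, corruption, generation.
    btrfs device stats /mnt/data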