Answers for DRBD time-travel issues
A little update on a DRBD problem we wrote about at the start of April, in which we lost a few months of data during a cluster failover.
Linbit got in touch with us to offer assistance, and we were happy to be enlightened. We had a good idea of what had happened, but no idea why.
It seems that a race condition was introduced in version 8.3.9, when the fence-peer script was changed to run asynchronously. The engineering team explained that if the connection is reestablished while the script is still running, the peer’s disk state may be overwritten with stale information.
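To make the failure mode concrete, here's a minimal sketch of that kind of read-then-write race. This is purely illustrative: the names and state values are invented for the example, not DRBD's actual internals, and a generator stands in for the asynchronously running fence-peer script so the interleaving is deterministic.

```python
# Illustrative sketch only -- names and values are hypothetical,
# not DRBD's real data structures.

peer_disk_state = "UpToDate"

def fence_peer_async():
    """Models the fence-peer script: it captures a state when it
    starts, then writes it back when it finishes."""
    global peer_disk_state
    captured = "Outdated"   # the stale value the script will write back
    yield                   # script is running; other events can interleave
    peer_disk_state = captured  # clobbers whatever happened meanwhile

handler = fence_peer_async()
next(handler)               # fencing script starts running

# Meanwhile, the connection is reestablished and the peer's
# state is refreshed to something current:
peer_disk_state = "UpToDate"

# The script finishes and overwrites the fresh state with stale info:
try:
    next(handler)
except StopIteration:
    pass

print(peer_disk_state)      # prints "Outdated" -- the fresh state was lost
```

The fix in 8.3.11 amounts to not letting a completion like this blindly overwrite state that changed while the script was in flight.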
This was fixed in 8.3.11, and of course we’re running version 8.3.10 on the cluster in question. We’d like to thank Linbit for their assistance and expertise in sorting this out; we’ve already started testing our plans for an upgrade.