Currently, we prevent clients from processing requests by taking
the server I/O lock. This leads to requests hanging for a long
time before being terminated when the migration completes, which
is not ideal. With this change, existing clients are closed at the
start of the final pass, and any new connections are closed
immediately (so no NBD server handshake is ever seen).
This is part of the work required to remove the server I/O
lock completely.
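Roughly, the new behaviour looks like the sketch below; struct serve,
client_signal_stop(), client_join() and the closing flag are illustrative
stand-ins rather than the actual flexnbd types and functions, and locking
is ignored for brevity.

    #include <sys/socket.h>
    #include <unistd.h>

    struct client;
    struct serve {
        int             listen_fd;
        int             closing;     /* set at the start of the final pass */
        int             num_clients;
        struct client **clients;
    };

    /* Provided elsewhere in the real code. */
    void client_signal_stop( struct client *c );
    void client_join( struct client *c );

    static void begin_final_pass( struct serve *serve )
    {
        int i;

        /* Ask every existing client thread to stop, then wait for each. */
        for ( i = 0; i < serve->num_clients; i++ )
            client_signal_stop( serve->clients[i] );
        for ( i = 0; i < serve->num_clients; i++ )
            client_join( serve->clients[i] );

        /* From here on, the accept loop drops new connections at once. */
        serve->closing = 1;
    }

    static void accept_one( struct serve *serve )
    {
        int fd = accept( serve->listen_fd, NULL, NULL );

        if ( fd < 0 )
            return;

        if ( serve->closing ) {
            /* No NBD handshake is sent; the client just sees EOF. */
            close( fd );
            return;
        }

        /* ... normal client setup would go here ... */
    }
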
If we're above max_bytes_per_second once we've finished a transfer
(8MB chunks, worst-case), then we delay the next transfer until
all_dirty_bytes / duration < max_bytes_per_second, checking once
per second.
If this isn't good enough, we can improve it; a leaky bucket is one
option. To begin with, though, we'll mostly be using this to set
max_bps to either 0 or around 100MB/sec, so it should be fine.
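As a sketch of that throttle (the function and parameter names here are
just illustrative, and I'm assuming max_bytes_per_second == 0 means "no
limit", which may not match the real semantics):

    #include <stdint.h>
    #include <time.h>
    #include <unistd.h>

    /* Sleep in one-second steps until the average rate over the whole
     * transfer drops below the cap.  Illustrative only. */
    static void throttle( uint64_t all_dirty_bytes,
                          time_t   start,
                          uint64_t max_bytes_per_second )
    {
        /* Assumption: 0 means "no limit". */
        if ( max_bytes_per_second == 0 )
            return;

        for ( ;; ) {
            double duration = difftime( time( NULL ), start );

            if ( duration < 1.0 )
                duration = 1.0;

            if ( all_dirty_bytes / duration < (double)max_bytes_per_second )
                break;

            sleep( 1 );
        }
    }
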
The idea behind this feature was to avoid the client thread in a listen
server getting stuck forever if the mirroring thread in the source died.
However, it breaks any sane implementation of max_Bps in that thread,
and there are lingering concerns over how it might operate under normal
conditions anyway.
Specifically, if iterating over the bitmap takes a long time, or even just
reading the requisite 8MB from the disc in order to send it, then the
5-second timeout could be hit, causing mirroring to fail unnecessarily.
It's not actually honoured yet, and ideally you'd also be able to set it
as part of the initial setup: "flexnbd mirror ... -m 4G". remote_argv for
the mirror case would need to move to an x=y z=w format first, though.
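If we do add that, a hypothetical helper for turning a "4G"-style
argument into a byte count could look like this (parse_size and its
suffix handling are made up for illustration, not existing code):

    #include <stdint.h>
    #include <stdlib.h>

    /* Hypothetical: parse "4G", "512M", "1024" etc. into a byte count. */
    static uint64_t parse_size( const char *s )
    {
        char    *end = NULL;
        uint64_t n   = strtoull( s, &end, 10 );

        switch ( *end ) {
        case 'G': case 'g': return n << 30;
        case 'M': case 'm': return n << 20;
        case 'K': case 'k': return n << 10;
        default:            return n;
        }
    }
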
Previously, we were setting bits individually up to the first byte
boundary, memset()ing up to the last byte boundary, then ignoring the
memset() and setting every single bit again, one at a time, from where
the first for-loop left off right up to the last one.
This should be *at least* nine times faster.
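The intended shape is roughly this (bit_set_range and the bit ordering
are illustrative, not the real bitmap code): set the ragged bits at each
end one at a time, and memset() the whole bytes in between.

    #include <stdint.h>
    #include <string.h>

    /* Illustrative only: set bits [from, from + len) in map. */
    static void bit_set_range( uint8_t *map, uint64_t from, uint64_t len )
    {
        uint64_t last = from + len;

        /* Leading bits, up to the first byte boundary. */
        while ( from < last && ( from & 7 ) != 0 ) {
            map[from >> 3] |= (uint8_t)( 1 << ( from & 7 ) );
            from++;
        }

        /* Whole bytes in the middle, in one memset(). */
        if ( last - from >= 8 ) {
            memset( &map[from >> 3], 0xff, (size_t)( ( last - from ) >> 3 ) );
            from += ( last - from ) & ~(uint64_t)7;
        }

        /* Trailing bits, after the last byte boundary. */
        while ( from < last ) {
            map[from >> 3] |= (uint8_t)( 1 << ( from & 7 ) );
            from++;
        }
    }
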
The mirroring can be abandoned and serve->mirror set to NULL under the
mirror mutex, so we need to take the same lock around reading any
information from serve->mirror.
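The pattern is roughly the following; the field names (l_mirror,
dirty_bytes) are illustrative, not the real struct members.

    #include <pthread.h>
    #include <stdint.h>

    struct mirror {
        uint64_t dirty_bytes;
    };

    struct serve {
        pthread_mutex_t l_mirror;  /* illustrative name for the mirror mutex */
        struct mirror  *mirror;    /* NULLed if the mirror is abandoned */
    };

    /* Only dereference serve->mirror while the mirror mutex is held. */
    static uint64_t mirror_dirty_bytes( struct serve *serve )
    {
        uint64_t bytes = 0;

        pthread_mutex_lock( &serve->l_mirror );
        if ( serve->mirror != NULL )
            bytes = serve->mirror->dirty_bytes;
        pthread_mutex_unlock( &serve->l_mirror );

        return bytes;
    }
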
We're not actually using it in production right now because it doesn't
shut its sockets down cleanly enough. This is a better option than
reverting the functionality or keeping production downgraded until
we sort out a handler that cleanly closes the sockets.