mdmon [--all] [--takeover] [--offroot] CONTAINER
- array_state - inactive
- Clear the dirty bit for the volume and let the array be stopped.
- array_state - write pending
- Set the dirty bit for the volume and then set array_state to active. Writes are blocked until userspace writes active.
- array_state - active-idle
- The safe mode timer has expired, so set array_state to clean to block writes to the array.
- array_state - clean
- Clear the dirty bit for the volume.
- array_state - read-only
- This is the initial state that all arrays start at. mdmon takes one of three actions:
- 1/
- Transition the array to read-auto, keeping the dirty bit clear, if the metadata handler determines that the array does not need resyncing or other modification.
- 2/
- Transition the array to active if the metadata handler determines that a resync or some other manipulation is necessary.
- 3/
- Leave the array read-only if the volume is marked to not be monitored; for example, the metadata version has been set to "external:-dev/md127" instead of "external:/dev/md127".
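The decision table above can be sketched as a pure shell function. This is only an illustration of the mapping from an observed array_state to mdmon's response; the function name and the returned labels are invented here and are not mdmon internals (mdmon itself reacts to sysfs change events rather than being called this way).

```shell
# Hypothetical sketch of mdmon's per-state response. Labels are illustrative.
mdmon_action() {
    case "$1" in
        inactive)      echo clear-dirty ;;            # then let the array stop
        write-pending) echo set-dirty-then-active ;;  # unblock queued writes
        active-idle)   echo write-clean ;;            # safe mode timer expired
        clean)         echo clear-dirty ;;
        read-only)     echo initial-decision ;;       # read-auto, active, or leave read-only
        *)             echo no-op ;;
    esac
}

mdmon_action write-pending   # prints set-dirty-then-active
```

Note that "read-only" is the only state where the response depends on the metadata handler rather than being fixed, which is why it maps to a decision rather than a single action.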
External metadata formats, like DDF, differ from the native MD metadata formats in that they define a set of disks and a series of sub-arrays within those disks. MD metadata in comparison defines a 1:1 relationship between a set of block devices and a raid array. For example to create 2 arrays at different raid levels on a single set of disks, MD metadata requires the disks be partitioned and then each array can be created with a subset of those partitions. The supported external formats perform this disk carving internally.
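As an illustration of the container model, a container and a member array might be created with mdadm roughly as follows. The device names and the choice of the IMSM metadata format are examples only, and the commands are printed rather than executed here since they require root and real disks:

```shell
# Shown for illustration, not run: first create a container over the raw
# disks, then create a RAID1 member array inside that container.
CMDS='mdadm --create /dev/md/imsm0 --metadata=imsm --raid-devices=2 /dev/sda /dev/sdb
mdadm --create /dev/md/vol0 --level=1 --raid-devices=2 /dev/md/imsm0'
printf '%s\n' "$CMDS"
```

The second command names the container, not individual disks or partitions, as its device argument; the metadata handler carves the member array out of the container's disks internally.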
Container devices simply hold references to all member disks and allow tools like mdmon to determine which active arrays belong to which container. Some array management commands, like disk removal and disk add, are now only valid at the container level. Attempts to perform these actions on member arrays are rejected with an error message.
Containers are identified in /proc/mdstat with a metadata version string "external:<metadata name>". Member devices are identified by "external:/<container device>/<member index>", or "external:-<container device>/<member index>" if the array is to remain readonly.
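Given the conventions above, a metadata version string can be classified with a simple pattern match. This is a sketch (the function name is invented), and pattern order matters: the two member forms must be tested before the bare container form, since "external:" is a prefix of all three.

```shell
# Classify a /proc/mdstat metadata version string using the documented forms.
md_kind() {
    case "$1" in
        external:/*) echo member ;;            # e.g. external:/md127/0
        external:-*) echo member-readonly ;;   # e.g. external:-md127/0
        external:*)  echo container ;;         # e.g. external:imsm
        *)           echo native ;;            # e.g. 1.2
    esac
}

md_kind external:imsm       # prints container
md_kind external:/md127/0   # prints member
md_kind external:-md127/0   # prints member-readonly
```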
Note that mdmon is automatically started by mdadm when needed, so it does not normally need to be considered when working with RAID arrays. The only time it is run other than by mdadm is when the boot scripts need to restart it after mounting the new root filesystem.
As mdmon needs to be running whenever any filesystem on the monitored device is mounted, there are special considerations when the root filesystem is mounted from an mdmon-monitored device. Note that in general mdmon is needed even if the filesystem is mounted read-only, as some filesystems can still write to the device in those circumstances, for example to replay a journal after an unclean shutdown.
When the array is assembled by the initramfs code, mdadm will automatically start mdmon as required. This means that mdmon must be installed on the initramfs and there must be a writable filesystem (typically tmpfs) in which mdmon can create a .pid and .sock file. The particular filesystem to use is given to mdmon at compile time and defaults to /run/mdadm.
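For a container device named md127 and the default compile-time run directory, the files mdmon needs would live at paths like the following. The exact naming is illustrative; only the run directory default (/run/mdadm) is stated above.

```shell
# Illustrative pid and socket paths for a container named md127,
# assuming the compile-time default run directory.
RUNDIR=/run/mdadm
CONTAINER=md127
echo "$RUNDIR/$CONTAINER.pid"    # prints /run/mdadm/md127.pid
echo "$RUNDIR/$CONTAINER.sock"   # prints /run/mdadm/md127.sock
```

This is why the initramfs must provide a writable filesystem at that location before the array is assembled: without it, mdmon cannot record its pid or open its control socket.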
This filesystem must persist through to shutdown time.
After the final root filesystem has been instantiated (usually with pivot_root), mdmon should be run with --all --takeover so that the mdmon instance running from the initramfs can be replaced with one running in the main root, allowing the memory used by the initramfs to be released.
At shutdown time, mdmon should not be killed along with other processes. Also, as it holds a socket open in the run directory (/run/mdadm by default), it will not be possible to unmount that filesystem while mdmon is running, if it is mounted separately.
mdmon --all --takeover
Any mdmon which is currently running is killed and a new instance is started.
This should be run during the boot sequence if an initramfs was used, so that any mdmon running from the initramfs will not hold the initramfs active.