Guidelines for Managing Mirrors

How Many Mirror Copies Should I Use?

You specify the number of mirror copies you want when you create or extend a logical volume, using the -m option of lvcreate or lvextend. The default is no mirror copies.
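For example, a minimal sketch (the volume group and logical volume names here are illustrative):

    # Create a 500 MB logical volume with one mirror copy (two copies of the data)
    lvcreate -L 500 -m 1 -n lvol_data /dev/vg01

    # Add a second mirror copy (three copies in all) to an existing logical volume
    lvextend -m 2 /dev/vg01/lvol_data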

Double mirroring (two mirror copies) provides even higher availability than single mirroring, but consumes more disk space. For example, with a single mirror copy, taking that copy offline to back it up leaves the data unprotected for the duration of the backup. With double mirroring, one copy can be taken offline for the backup while the other remains online, so the data is still protected if something happens to the original.
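An online backup of a double-mirrored volume might look like the following sketch (names are illustrative; see lvsplit (1M) and lvmerge (1M)):

    # Split off one mirror copy as a separate, backup-ready logical volume
    lvsplit -s backup /dev/vg01/lvol_data

    # ...back up /dev/vg01/lvol_databackup, then merge the copy back in...
    lvmerge /dev/vg01/lvol_databackup /dev/vg01/lvol_data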

Should My Mirrored Data Reside on Different Disks?

Keeping mirror copies on separate disks is known as strict allocation. Strict allocation is the default when you set up mirroring; HP recommends it because with non-strict allocation a mirror copy can share a disk with the original data, in which case a single disk failure can make both copies unavailable. Non-strict allocation does, however, still allow you to do online backups.

When opting for strict allocation, place your mirror copies on disks of identical type whenever possible.
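The allocation policy is controlled by the -s option of lvcreate. A brief sketch, using illustrative names:

    # Strict allocation (the default): copies must reside on different disks
    lvcreate -L 500 -m 1 -s y -n lvol_strict /dev/vg01

    # Non-strict allocation: copies may share a disk (not recommended)
    lvcreate -L 500 -m 1 -s n -n lvol_nonstrict /dev/vg01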

Should I Use I/O Channel Separation?

By using I/O channel separation, in which mirror copies reside on disks accessed through separate interface cards and/or separate buses, you can achieve higher availability by reducing the number of single points of possible hardware failure. By contrast, if you mirror data through a single card or bus and that card or bus fails, all copies of the data become inaccessible.

One way to set up such channel separation is to divide the disks into physical volume groups, with each group containing the disks on a single card or bus, and then use PVG-strict allocation. PVG-strict allocation prevents a mirror copy from being allocated in the same physical volume group as the original.
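For example, you might group the disks on each interface card in the /etc/lvmpvg file and then create the logical volume with PVG-strict allocation (-s g); the device and group names below are illustrative:

    # /etc/lvmpvg: one physical volume group per interface card
    VG /dev/vg01
    PVG card0
    /dev/dsk/c0t1d0
    /dev/dsk/c0t2d0
    PVG card1
    /dev/dsk/c1t1d0
    /dev/dsk/c1t2d0

    # PVG-strict allocation: each mirror copy lands in a different group
    lvcreate -L 500 -m 1 -s g -n lvol_ha /dev/vg01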

Should My Mirrored Data Be Distributed With or Without Gaps?

You can also specify that a mirror copy be allocated in a contiguous manner, with no gaps in physical extents within the copy, and with all physical extents of any mirror copy allocated in ascending order on a single disk. This is called contiguous allocation.

By default, the allocation of physical extents is not contiguous; the extents are allocated to any location on any disk. But if a mirrored logical volume is to be used as root or for primary swap, the physical extents must be contiguous. Typically, you get better performance with contiguous allocation, but you’ll get more flexibility with non-contiguous allocation because you can increase the size of the logical volume if space is available anywhere within the volume group.
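Contiguous allocation is requested with the -C option of lvcreate; a minimal sketch with illustrative names:

    # Contiguous allocation, as required for root or primary swap volumes
    lvcreate -L 500 -m 1 -C y -n lvol_swap /dev/vg01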

Should My Mirrored Data Be Written Simultaneously or Sequentially?

You can specify that your mirror copies be written to disk in either a parallel (simultaneous) or a sequential fashion. A dynamic write policy is also provided.

Parallel writes are more efficient because data is transferred simultaneously to all mirror copies.

Sequential writes, on the other hand, provide for slightly higher data integrity than parallel writes. When you specify sequential writes, the data is written one copy at a time, starting the next copy only when the previous copy is completed.

Parallel writes are the default and are recommended because of the performance loss associated with sequential writes.

Dynamic writes allow the write policy to be chosen at the time each write is processed. If the write is synchronous, meaning that file system activity must complete before the process is allowed to continue, parallel writes are used, allowing quicker response time. If the write is asynchronous, meaning that the write does not need to complete immediately, sequential writes are selected.
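The write policy is selected with the -d option of lvcreate or lvchange; the sketch below shows the parallel and sequential values with illustrative names (how the dynamic policy is selected may vary by release; consult lvcreate (1M)):

    # Parallel writes (the default)
    lvcreate -L 500 -m 1 -d p -n lvol_data /dev/vg01

    # Switch an existing logical volume to sequential writes
    lvchange -d s /dev/vg01/lvol_data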

Which Crash Recovery Method Should I Select?

After a crash, it is usually essential that all mirror copies are brought back to a state where they contain the same data. Two choices are available for configuring mirrored logical volumes to recover consistent data:

The Mirror Write Cache method. The system recovers consistent data as fast as possible after a crash, but at the cost of some performance during routine system use. This is the default method.

The Mirror Write Cache method tracks each write of mirrored data to the physical volumes and maintains a record of any mirrored writes not yet successfully completed at the time of a crash. Therefore, during the recovery from a crash, only those physical extents involved in unsuccessful mirrored writes need to be made consistent. This is the recommended recovery mechanism, especially for data that is written sequentially.

The Mirror Consistency Recovery method. (This makes sense only when Mirror Write Cache is not used.) This has no impact on routine system performance, but requires a longer recovery period after a system crash. This method is recommended for data that is written randomly.

During recovery, the Mirror Consistency Recovery mechanism examines each logical extent in a mirrored logical volume, ensuring all copies are consistent.

Under limited circumstances, you might elect to use neither of the above consistency recovery mechanisms. If you do, be aware that if your system ever goes down uncleanly, whether from a crash or a power loss, data corruption or loss is highly likely. Therefore, the only data that can safely be put on such a mirrored logical volume is data not needed after a crash, such as swap or other raw scratch data, or data that an application will automatically reconstruct, for example, a raw logical volume for which a database keeps a log of incomplete transactions.

By using neither consistency recovery method, you increase performance and decrease crash recovery time. After a crash, the mirrored logical volume will be available, but the data may not be consistent across the mirror copies. The choice of which recovery mechanism to use (or neither) depends on the relative importance of performance, speed of recovery after a crash, and consistency of recovered data among the mirror copies.

You configure a logical volume to use the Mirror Write Cache or the Mirror Consistency Recovery method when you create the logical volume or when you later change its characteristics. You can implement your chosen configuration using either SAM or the HP-UX commands lvcreate (1M) or lvchange (1M). See the following section for more information.
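A brief sketch of the three configurations using lvchange, with illustrative names:

    # Mirror Write Cache on (the default)
    lvchange -M y /dev/vg01/lvol_data

    # Mirror Consistency Recovery only: MWC off, consistency recovery on
    lvchange -M n -c y /dev/vg01/lvol_data

    # Neither mechanism: suitable only for swap or scratch data
    lvchange -M n -c n /dev/vg01/lvol_scratch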

Synchronizing a Mirrored Logical Volume

At times, the data in your mirrored copy or copies of a logical volume can become out of sync, or “stale”. For example, this might happen if LVM cannot access a disk as a result of a disk power failure. Under such circumstances, synchronization must occur in order for each mirrored copy to hold identical data again. Usually, synchronization occurs automatically, although there are times when it must be done manually.

Automatic Synchronization

If you activate a volume group that is not currently active, either automatically at boot time or later with the vgchange command, LVM automatically synchronizes the mirrored copies of all logical volumes, replacing data in physical extents marked as stale with data from non-stale extents. Otherwise, no automatic synchronization occurs and manual synchronization is necessary.
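For example (the volume group name is illustrative):

    # Activating an inactive volume group synchronizes any stale extents
    vgchange -a y /dev/vg01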

LVM also automatically synchronizes mirrored data in the following cases:

- When a disk comes back online after experiencing a power failure.
- When you extend a logical volume by increasing the number of mirror copies; the newly added physical extents are synchronized.

Manual Synchronization

If you look at the status of a logical volume using lvdisplay -v, you can see whether the logical volume contains any stale data and identify which disk holds the stale physical extents. You can manually synchronize the data in one or more logical volumes with the lvsync command, or synchronize all logical volumes in one or more volume groups with the vgsync command. See lvdisplay (1M), lvsync (1M), and vgsync (1M) for more information.
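A short sketch of this sequence, with illustrative names:

    # Look for stale extents in the logical volume's status
    lvdisplay -v /dev/vg01/lvol_data | grep -i stale

    # Synchronize a single logical volume...
    lvsync /dev/vg01/lvol_data

    # ...or every logical volume in a volume group
    vgsync /dev/vg01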

When Replacing a Disk

In the event you need to replace a non-functional mirrored disk, perform the following steps to ensure that the data on the replacement disk is both synchronized and valid:

1. Run vgcfgbackup to save the volume group configuration information, if necessary.
2. Optionally, remove the disk from the volume group using vgreduce.
3. Physically disconnect the bad disk and connect the replacement.
4. Run vgcfgrestore to restore LVM configuration information to the replacement disk.
5. Run vgchange -a y to reactivate the volume group to which the disk belongs. Because the volume group is already active, no automatic synchronization occurs.
6. Run vgsync to manually synchronize all the extents in the volume group.
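Put together, the sequence might look like this sketch (device and volume group names are illustrative):

    vgcfgbackup /dev/vg01
    vgreduce /dev/vg01 /dev/dsk/c1t2d0          # optional
    # ...physically replace the disk...
    vgcfgrestore -n /dev/vg01 /dev/rdsk/c1t2d0
    vgchange -a y /dev/vg01
    vgsync /dev/vg01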