July 17, 2006
The LVM Advantage: Migrating Data
The Linux Logical Volume Management system (LVM) is a wonderful piece of technology. In a nutshell, you take disparate storage devices and place them under the control of LVM in a container called a Volume Group (or VG for short). You can then create Logical Volumes (or LV for short) in the VG, which can be formatted, mounted and used, just like any block device. The LVM code will manage everything.
Sometimes, when I'm first introducing students to LVM, they ask me something like, "Why would I ever want to do this? Add an extra layer of complexity and overhead, just to get some fancy looking partitions? No thanks; I'll stick to plain partitions."
First of all, LVM is very efficient, introducing almost zero overhead for most normal operations. Yes, there is a little, but it can be difficult to measure, let alone notice. Second, there are many irritating storage management tasks that are downright child's play to accomplish with LVM in place but quite difficult with just plain partitions.
For example, you can add additional storage and move existing data onto it, clearing out old devices without having to shut anything down. Let your programs and users continue to access and manipulate the data. In most cases, they won't even be able to tell you're moving stuff to a new hard drive. Of course, if your system doesn't support hot-plugging of new hard drives, you will have to power down to add the drive physically to the system and again to remove the old one(s).
This is the case with my home file server, so I shut it down to add the new drive (read my recent posts to learn about that adventure). But while I'm moving the data, the entire system is running and available as normal. I can read and write data all over my LVM LVs without worry.
Here's the sequence of events:
- Insert the new drive into the system; if you couldn't hot-swap, boot it back up
- Get to a root shell and create an LVM partition on the new drive (that's partition type 0x8E)
- Initialize it as a physical volume with pvcreate /dev/sdb1 (change /dev/sdb1 to the correct device, of course)
- Add the new storage to the VG of your choice (a PV can only be in one VG) by running vgextend vg0 /dev/sdb1 (change vg0 to an appropriate value)
- Enjoy your expanded storage!
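The steps above boil down to just a few commands. This is a sketch using the example names from the list (/dev/sdb1 and vg0); substitute your own device and VG names:

```shell
# Partition the new drive with a single partition of type 8e (Linux LVM).
fdisk /dev/sdb

# Write the LVM physical-volume label onto the new partition.
pvcreate /dev/sdb1

# Add the new PV to the existing volume group.
vgextend vg0 /dev/sdb1

# Confirm the VG now shows the additional free extents.
vgdisplay vg0
```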
OK. In my case, I want to move everything from all of the old drives onto the one new one, so I'll use a special form of the pvmove command. If I weren't going to be removing all of the existing devices and were keeping more than just the one drive, I would not specify the second parameter to pvmove. Here is the output from running them:
```
# time pvmove /dev/hde2 /dev/sda1
  /dev/hde2: Moved: 0.3%
  /dev/hde2: Moved: 0.7%
  /dev/hde2: Moved: 1.0%
  /dev/hde2: Moved: 1.4%
  /dev/hde2: Moved: 1.7%
  /dev/hde2: Moved: 2.1%
  . . . snip . . .
real    72m23.484s
user    0m4.812s
sys     0m4.116s
# time pvmove /dev/hdg1 /dev/sda1
  . . . snip . . .
  /dev/hdg1: Moved: 44.2%
  /dev/hdg1: Moved: 44.6%
  /dev/hdg1: Moved: 45.0%
  /dev/hdg1: read failed after 0 of 1024 at 4096: Input/output error
  /dev/hdg1: read failed after 0 of 2048 at 0: Input/output error
  Failed to read existing physical volume '/dev/hdg1'
  Physical volume /dev/hdg1 not found
  ABORTING: Can't reread PV /dev/hdg1
  ABORTING: Can't reread VG for /dev/hdg1
```
I'll talk about the errors in a minute.
As you can see, it can take a while to move that much data, so running them in a screen session is recommended.
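For example, a long-running pvmove can be wrapped in a named screen session so it survives a dropped SSH connection. The session name here ("migrate") is just an example:

```shell
# Start a named screen session and kick off the move inside it.
screen -S migrate
time pvmove /dev/hde2 /dev/sda1

# Detach with Ctrl-a d; the move keeps running in the background.
# Later, reattach to check on its progress:
screen -r migrate
```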
With the two-parameter form of pvmove, I'm telling LVM to migrate data off of the old drives and put it only on the new one. Again, except in these kinds of very narrow circumstances, it's better to tell it only the device to move off of and let LVM decide where to put the data (it'll make things much more efficient that way).
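The two forms side by side, using the device names from this post:

```shell
# Two-argument form: drain /dev/hde2 and place every moved extent
# on /dev/sda1 specifically -- what I needed here, since I'm
# emptying all of the old drives onto the single new one.
pvmove /dev/hde2 /dev/sda1

# One-argument form (usually the better choice): drain /dev/hde2
# and let LVM distribute the extents across whatever free space
# exists elsewhere in the volume group.
pvmove /dev/hde2
```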
OK, so there were some errors running pvmove and it bailed out. Let's see what we're dealing with:
```
# pvdisplay /dev/hdc1
  /dev/hdc1: read failed after 0 of 2048 at 0: Input/output error
  No physical volume label read from /dev/hdc1
  Failed to read physical volume "/dev/hdc1"
```
Uh oh. Looks like I'm going to have a little difficulty getting this one fixed. I ran a few other commands, looking at LVs that are already mounted, and it seems that the corrupted portion of the disk is not affecting any existing data. I did have a /dev/data/up2date LV, which I deleted during the installation of SUSE Linux 10.1 (I won't need that Red Hat specific item). Prior to the installation, it was not mountable, and fsck couldn't recover it because of drive problems. This was one of the reasons I decided to get the new drive in the first place.
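Assuming the PV label is still readable at all, standard LVM2 reporting commands can show which logical volumes actually have extents on a suspect drive; this is a sketch of the kind of checking involved:

```shell
# Map the physical extents on the drive to the LVs that use them
# (-m / --maps shows the extent-to-LV mapping).
pvdisplay -m /dev/hdc1

# Alternatively, list every LV along with the devices backing it,
# and look for the suspect drive in the Devices column.
lvs -o +devices
```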
Hopefully, I'll find a way to verify whether or not there is any other data on that drive that I need to worry about. But as it's getting late, I think I'm about to head off to bed.
Just to finish things off, run vgreduce with your VG name and the drained device (/dev/hde2, in my case) to remove it from the volume group. At this point, I can remove the 120GB drive, and once I fix up and finish migrating from the 45GB drive, I'll be done. In my case, that means another powering down (so I'll wait till I can pull them both), but it is a home file server, so I don't mind.
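The finishing sequence looks like this; vg0 is the example VG name from earlier in the post, so substitute your own:

```shell
# Detach the now-empty PV from the volume group.
vgreduce vg0 /dev/hde2

# Optionally erase the LVM label so the disk is truly free
# before physically pulling it from the machine.
pvremove /dev/hde2
```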
It's very nice to have access to all of my data while I'm upgrading my storage system. There isn't even a performance penalty for other I/O, as the LVM code will submit the pvmove I/O operations as low priority to the kernel's I/O scheduler.
Posted by lamontp at July 17, 2006 11:15 PM