after the lvremove command stopped responding, the guest domain was still running, but anything that had to write to the drive would hang, so it was effectively stopped. after a reboot, the other guest domains were restored, but the domain whose drive had been snapshotted did not come back up. i updated lvm2 to version 2.02.02 and lvremove successfully removed the snapshot. i tried starting the domain again, but the logical volume would not mount.
i tried using fsck to repair the file system on the logical volume, but fsck could not find it. i tried using an alternate superblock, but that didn't work either. eventually, lvscan showed me that the logical volume was not active. the earlier lvremove hang must have left that logical volume and its snapshot inactive. i used lvchange to reactivate it, and then i was able to fsck it and start the guest domain back up.
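for anyone who just wants the short version: once lvscan showed the volume as inactive, reactivating it and checking it was all it took (volume names are from my setup):

localhost:~# lvchange -a y /dev/vg01/vps6main
localhost:~# e2fsck /dev/vg01/vps6main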
__________
here are some commands i used to create the snapshot originally, some problems i ran into, and how i solved them:
#when i first tried to create the snapshot, i didn't have the module enabled
localhost:~# lvcreate -L20G -s -n tempsnap /dev/vg01/vps6main
snapshot: Required device-mapper target(s) not detected in your kernel
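#(optional check: you can list the device-mapper targets the running kernel actually has, and see
#whether the snapshot module is loaded; this needs the dmsetup package)
localhost:~# dmsetup targets
localhost:~# lsmod | grep dm_snapshot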
#loaded the lvm snapshot module
localhost:~# modprobe dm_snapshot
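#to get it loaded automatically on boot as well, add it to /etc/modules (debian loads everything listed there at boot)
localhost:~# echo dm_snapshot >> /etc/modules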
#created the snapshot (note: you can't use "snapshot" anywhere in the name)
localhost:~# lvcreate -L20G -s -n tempsnap /dev/vg01/vps6main
Logical volume "tempsnap" created
#after creating the snapshot, i created a tar backup of it, then tried to remove it
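#the backup itself isn't captured in this log; it amounts to mounting the snapshot read-only and
#tarring it up, roughly like this (the mount point and archive path are just examples)
localhost:~# mkdir -p /mnt/tempsnap
localhost:~# mount -o ro /dev/vg01/tempsnap /mnt/tempsnap
localhost:~# tar -czf /root/vps6main-backup.tar.gz -C /mnt/tempsnap .
localhost:~# umount /mnt/tempsnap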
localhost:~# lvremove /dev/vg01/tempsnap
Do you really want to remove active logical volume "tempsnap"? [y/n]: y
#here's where the hang occurred
#after the reboot i updated lvm2 for sarge
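#the upgrade itself isn't shown here; on sarge it would be a package upgrade along these lines,
#assuming a repository (e.g. a backport) that actually carries lvm2 2.02.02, since the stock sarge
#version is older
localhost:~# apt-get update
localhost:~# apt-get install lvm2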
localhost:~# lvs
  LV       VG   Attr   LSize   Origin   Snap%  Move Copy%
  tempsnap vg01 swi--- 20,00G  vps6main
  vps6main vg01 owi--- 20,00G
  vps6swap vg01 -wi-a- 800,00M
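#note the Attr column: the fifth character is the state flag, and only vps6swap has the 'a' (active) there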
localhost:~# lvremove /dev/vg01/tempsnap
Logical volume "tempsnap" successfully removed
localhost:~# lvs
  LV       VG   Attr   LSize   Origin Snap%  Move Log Copy%
  vps6main vg01 -wi--- 20,00G
  vps6swap vg01 -wi-a- 800,00M
#mount can't read the logical volume because it's not active, so it can't work out the filesystem type
localhost:~# mount /dev/vg01/vps6main /mnt
mount: you must specify the filesystem type
#same with e2fsck: the lv is not active, so its device node doesn't exist
localhost:~# e2fsck /dev/vg01/vps6main
e2fsck 1.37 (21-Mar-2005)
e2fsck: No such file or directory while trying to open /dev/vg01/vps6main
The superblock could not be read or does not describe a correct ext2
filesystem. If the device is valid and it really contains an ext2
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
e2fsck -b 8193
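#the alternate superblock attempt mentioned at the top was along these lines, and it failed the
#same way because the device node still didn't exist (the block number is a guess; for a 4k-block
#filesystem the first backup superblock is at 32768 rather than 8193)
localhost:~# e2fsck -b 8193 /dev/vg01/vps6main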
#lvdisplay showed LV Status as NOT available
localhost:~# lvdisplay /dev/vg01/vps6main
  --- Logical volume ---
  LV Name                /dev/vg01/vps6main
  VG Name                vg01
  LV UUID                JkasfA-60JG-MFVt-Ys3L-fWyI-4q01-23yFyY
  LV Write Access        read/write
  LV Status              NOT available
  LV Size                20,00 GB
  Current LE             5120
  Segments               1
  Allocation             inherit
  Read ahead sectors     0
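#"NOT available" just means the lv hasn't been activated, so there's no device node under /dev/vg01
#for mount, e2fsck or mkfs to open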
#i tried a dry-run mkfs (-n) to see if it could see the logical volume
localhost:~# mkfs.ext3 -n /dev/vg01/vps6main
mke2fs 1.37 (21-Mar-2005)
Could not stat /dev/vg01/vps6main --- No such file or directory
The device apparently does not exist; did you specify it correctly?
#lvscan shows it's inactive
localhost:~# lvscan
ACTIVE '/dev/vg01/vps6swap' [800,00 MB] inherit
inactive '/dev/vg01/vps6main' [20,00 GB] inherit
#man lvchange says i can activate it with -a y
localhost:~# lvchange -a y /dev/vg01/vps6main
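#(vgchange -a y vg01 would activate every logical volume in the volume group at once; i only needed this one)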
#now vps6main is active
localhost:~# lvscan
ACTIVE '/dev/vg01/vps6swap' [800,00 MB] inherit
ACTIVE '/dev/vg01/vps6main' [20,00 GB] inherit
#now e2fsck can check it, and then i could mount it and/or start the guest domain back up
localhost:~# e2fsck /dev/vg01/vps6main
e2fsck 1.37 (21-Mar-2005)
/dev/vg01/vps6main: recovering journal
/dev/vg01/vps6main: clean, 29724/2621440 files, 281179/5242880 blocks
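#from here the guest domain could be started again; assuming a xen host (which "guest domain"
#suggests), that would be something like the following, with the config name being just an example
localhost:~# xm create vps6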