Summary: Our company's cloud platform runs the rather old XenServer 5.5. A power outage during a server-room renovation left two hosts unreachable; after rebooting and repairing the NICs, XenCenter could no longer find Local Storage on some hosts, so several important virtual machines would not start. This article records how the lost LVM volumes were diagnosed and recovered from the metadata backups under /etc/lvm/backup.
1. Symptoms
Our company's cloud platform runs XenServer 5.5, which is quite old by now. A few days ago a server-room renovation cut power to the cloud environment. After rebooting, two hosts could not be pinged, so we rebooted again and logged in to repair their network interfaces. We then discovered that XenCenter could not find Local Storage: on some hosts that entry was empty, and several important virtual machines would not start.
2. Diagnosis
We logged in to an affected host remotely and ran lvscan to inspect the LVM logical volumes:
[root@host202 backup]# lvscan
  inactive          '/dev/VG_XenStorage-4883c621-cad8-e6db-7d17-b33ac4eb1aaa/MGT' [4.00 MB] inherit
lvdisplay likewise showed only a single logical volume; the several volumes that used to be there were all gone. We concluded that the LVM metadata had been lost and looked in /etc/lvm/backup, which held two backup files:
[root@host202 backup]# ls -al /etc/lvm/backup/
total 16
drwx------ 2 root root 4096 Jan 22 16:48 .
drwxr-xr-x 5 root root 4096 Sep 19  2011 ..
-rw------- 1 root root 4072 Jan 21 15:24 VG_XenStorage-0b3d830f-b140-3fdf-f384-7c56f1e72923
-rw------- 1 root root 1259 Jan 22 16:48 VG_XenStorage-4883c621-cad8-e6db-7d17-b33ac4eb1aaa
cat-ing both files in turn showed that the larger one held the metadata I needed, so I prepared to restore it.
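When there are several backup files, the larger one usually carries the fuller volume list. A small hedged helper to pick it out (the function name and the directory parameter are my own illustration, not part of the original commands):

```shell
# Print the largest file in a directory of LVM metadata backups.
# On the affected host the directory would be /etc/lvm/backup.
largest_backup() {
    ls -S "$1" | head -n 1   # ls -S sorts by size, largest first
}

# example: largest_backup /etc/lvm/backup
```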
3. Experiment
Rather than operate on the damaged hosts directly, the plan was to rebuild the scenario in a lab first: install CentOS under VMware, create a few logical volumes, back up the metadata, delete everything, then create a fresh logical volume over the old data, and finally attempt a restore to see whether it succeeds. The test VM has two disks: a 10 GB system disk with CentOS installed on it, and a 2 GB disk for the experiment.
3.1 Setting up the experiment
The environment was initialized with the following commands:
pvcreate /dev/sdb1                     ## create the LVM physical volume
vgcreate lvmfix /dev/sdb1              ## create a volume group on it, i.e. combine disks into one conceptual disk
lvcreate -n fix01-10M -L 10M lvmfix    ## create a 10 MB logical volume; Linux uses it like a disk partition
lvcreate -n fix01-101M -L 101M lvmfix  ## create several in a row; this one is 101 MB
lvcreate -n fix01-502M -L 502M lvmfix  ## and a 502 MB one
Format each volume and put a few recognizable files on it; the steps below have to be repeated for every logical volume:
mkfs -j /dev/lvmfix/fix01-10M          ## ext3 (journalled) filesystem
mkdir /root/f10m
mount /dev/lvmfix/fix01-10M /root/f10m
echo "abc 10m hello" > /root/f10m/f10m-readme
After treating the other two volumes the same way, back up the LVM layout:
vgcfgbackup -f %s-20140124   ## %s expands to the volume-group name, so this writes lvmfix-20140124
Now simulate the failure: first destroy the volumes completely, then create a new volume group that overwrites part of the data.
vgremove lvmfix      ## remove the volume group; answer yes to every prompt
pvremove /dev/sdb1   ## remove the physical volume as well
Recreate a temporary group on top, overwriting the original data:
pvcreate /dev/sdb1
vgcreate vg-fix-2 /dev/sdb1
lvcreate -n wrong-op -L 1G vg-fix-2
vgcfgbackup -f %s-after-wrong-op   ## back up the broken layout too; strictly optional
3.2 The recovery procedure
Now start the recovery. First remove the temporarily created volumes:
vgremove vg-fix-2
pvremove /dev/sdb1
Then examine the earlier layout backup, /root/lvmfix-20140124, and extract the PV UUID and the VG UUID:
grep "id =" /root/lvmfix-20140124
The second id = line is the PV UUID; write it down (referred to as {pvuuid} below). Then recreate a physical volume with that same UUID:
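The extraction can also be scripted instead of counting grep lines by eye. A hedged sketch that pulls the pv0 UUID out of a backup file, relying only on the metadata layout shown in this article (the function name is mine):

```shell
# Extract the UUID of physical volume pv0 from an LVM metadata backup file.
# The VG's own id = line comes first in the file; the PV id sits inside
# physical_volumes { pv0 { ... } }, so we only start matching after "pv0".
pv_uuid() {
    awk '/pv0/ { in_pv = 1 }
         in_pv && $1 == "id" { gsub(/"/, "", $3); print $3; exit }' "$1"
}

# example: pv_uuid /root/lvmfix-20140124
```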
pvcreate --restorefile /root/lvmfix-20140124 --uuid {pvuuid} /dev/sdb1   ## note: on older LVM versions, leave out the --restorefile part
Then restore the volume group:
vgcfgrestore --test --file /root/lvmfix-20140124 lvmfix   ## restore the original VG layout; test it first
vgcfgrestore --file /root/lvmfix-20140124 lvmfix          ## then drop --test and do it for real
lvscan                                                    ## afterwards, check whether the volumes are back
vgchange -ay lvmfix                                       ## remember to activate the VG so its state becomes active
mount -t ext3 /dev/lvmfix/fix01-10M /root/f10m            ## re-mount; a filesystem check may be needed
The mount failed with mount: wrong fs type, bad option, bad superblock; the filesystem was damaged and needed repair:
e2fsck /dev/lvmfix/fix01-10M   ## repair the filesystem
Remember to mount and check every logical volume, repairing any that report errors. In my experience only the first one or two volumes get damaged; the later ones are usually intact. Be warned, though, that this repair method must not be used on XenServer's VHD volumes.
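The check-everything loop can be sketched like this. The function and its parameters are my own illustration, not part of the original procedure; passing echo as the command gives a dry run before committing to the real e2fsck:

```shell
# Run a repair command over every logical volume passed in.
# First argument: the command ("echo" to dry-run, "e2fsck -y" for real).
# Remaining arguments: the LV device paths.
check_lvs() {
    cmd="$1"; shift
    for lv in "$@"; do
        $cmd "$lv"   # intentionally unquoted so "e2fsck -y" splits into words
    done
}

# example dry run:
#   check_lvs echo /dev/lvmfix/fix01-10M /dev/lvmfix/fix01-101M
```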
4. The actual operation
With the experiment a success, it was time to operate on the damaged hosts. Plenty of other problems cropped up along the way and made it a grind; to borrow the screen name of a foreigner I came across while searching for material: I Hate Xen!
4.1 Stage one: cleanup
First locate the physical volume's UUID:
[root@host202 backup]# head -50 /etc/lvm/backup/VG_XenStorage-0b3d830f-b140-3fdf-f384-7c56f1e72923
# Generated by LVM2 version 2.02.56(1)-RHEL5 (2010-04-22): Tue Jan 21 15:24:17 2014

contents = "Text Format Volume Group"
version = 1

description = "Created *after* executing '/usr/sbin/pvresize /dev/disk/by-id/scsi-3600605b00283629017a39a1525dc3ec8-part3'"

creation_host = "host202"	# Linux host202 2.6.32.12-0.7.1.xs1.1.0.327.170596xen #1 SMP Fri Sep 16 17:45:00 EDT 2011 i686
creation_time = 1390289057	# Tue Jan 21 15:24:17 2014

VG_XenStorage-0b3d830f-b140-3fdf-f384-7c56f1e72923 {	## remember this volume-group name; we will recreate it shortly
	id = "vcm98B-U8Ii-rB2z-Z0hP-0svE-DiM7-lsHXSe"
	seqno = 18
	status = ["RESIZEABLE", "READ", "WRITE"]
	flags = []
	extent_size = 8192		# 4 Megabytes
	max_lv = 0
	max_pv = 0

	physical_volumes {

		pv0 {
			id = "OfQbfY-Fbvf-p5KW-8s8x-iyrx-VZ4F-ogDpIv"	## this is the physical volume UUID (pvuuid) we are after
			device = "/dev/sda3"	# Hint only

			status = ["ALLOCATABLE"]
			flags = []
Check which volume group is actually present, in preparation for deleting the one XenServer created automatically during its recovery:
[root@host202 backup]# vgscan
  Reading all physical volumes.  This may take a while...
  Found volume group "VG_XenStorage-4883c621-cad8-e6db-7d17-b33ac4eb1aaa" using metadata type lvm2
As in the experiment, remove the useless volume group:
[root@host202 backup]# vgremove VG_XenStorage-4883c621-cad8-e6db-7d17-b33ac4eb1aaa
Do you really want to remove volume group "VG_XenStorage-4883c621-cad8-e6db-7d17-b33ac4eb1aaa" containing 1 logical volumes? [y/n]: y
  Logical volume "MGT" successfully removed
  Volume group "VG_XenStorage-4883c621-cad8-e6db-7d17-b33ac4eb1aaa" successfully removed
Look at the physical volume next, then get ready to delete it. Note in the pvscan output below that the PV now belongs to no volume group, i.e. it looks empty. Don't be alarmed: as long as nothing has been written into it, the original contents can still be recovered. When deleting, also mind the device path in the command; your disk partition will not be the same as on my machine.
[root@host202 backup]# pvscan
PV /dev/sda3 lvm2 [456.73 GB]
Total: 1 [456.73 GB] / in use: 0 [0 ] / in no VG: 1 [456.73 GB]
Delete the physical volume:
[root@host202 backup]# pvremove /dev/sda3
  Labels on physical volume "/dev/sda3" successfully wiped
4.2 Stage two: restoring LVM
Following the experiment, we recreate a physical volume with the same name and UUID; the UUID below is the pvuuid noted down earlier. And whatever you do, remember that /dev/sda3 is specific to my machine: check which partition you pvremoved in the previous step, because creating the PV on the wrong device really does lose everything.
[root@host202 backup]# pvcreate --restorefile ./VG_XenStorage-0b3d830f-b140-3fdf-f384-7c56f1e72923 -uuid OfQbfY-Fbvf-p5KW-8s8x-iyrx-VZ4F-ogDpIv /dev/sda3
  Can only set uuid on one volume at once
  Run `pvcreate --help' for more information.
The old LVM release here refuses that form, so retry with just --uuid:
[root@host202 backup]# pvcreate --uuid OfQbfY-Fbvf-p5KW-8s8x-iyrx-VZ4F-ogDpIv /dev/sda3
  Physical volume "/dev/sda3" successfully created
Now restore the volume group; remember that its name comes from the first step of stage one. Test first, then write for real.
[root@host202 backup]# vgcfgrestore --test --file VG_XenStorage-0b3d830f-b140-3fdf-f384-7c56f1e72923 VG_XenStorage-0b3d830f-b140-3fdf-f384-7c56f1e72923
Test mode: Metadata will NOT be updated.
Restored volume group VG_XenStorage-0b3d830f-b140-3fdf-f384-7c56f1e72923
Do the actual write to restore the LVM layout; once again, mind the volume-group name:
[root@host202 backup]# vgcfgrestore --file VG_XenStorage-0b3d830f-b140-3fdf-f384-7c56f1e72923 VG_XenStorage-0b3d830f-b140-3fdf-f384-7c56f1e72923
Restored volume group VG_XenStorage-0b3d830f-b140-3fdf-f384-7c56f1e72923
Now for the results, which are quite gratifying. First the logical volumes; note their inactive state:
[root@host202 backup]# lvscan
  inactive          '/dev/VG_XenStorage-0b3d830f-b140-3fdf-f384-7c56f1e72923/MGT' [4.00 MB] inherit
  inactive          '/dev/VG_XenStorage-0b3d830f-b140-3fdf-f384-7c56f1e72923/VHD-b4df3ed3-d6fd-4276-832b-a3a0f1c70bd0' [8.02 GB] inherit
  inactive          '/dev/VG_XenStorage-0b3d830f-b140-3fdf-f384-7c56f1e72923/VHD-5ceec995-26ec-4986-931f-3d1804807650' [192.38 GB] inherit
  inactive          '/dev/VG_XenStorage-0b3d830f-b140-3fdf-f384-7c56f1e72923/VHD-3a3a681d-c1c2-4636-a656-f9901343d33d' [92.19 GB] inherit
  inactive          '/dev/VG_XenStorage-0b3d830f-b140-3fdf-f384-7c56f1e72923/VHD-a69ae385-924c-42e7-af38-2e38ffeaf851' [8.02 GB] inherit
  inactive          '/dev/VG_XenStorage-0b3d830f-b140-3fdf-f384-7c56f1e72923/VHD-a3e49a56-2326-44d4-a136-3e4a28beded7' [6.02 GB] inherit
  inactive          '/dev/VG_XenStorage-0b3d830f-b140-3fdf-f384-7c56f1e72923/VHD-2b1a8fca-90d7-4ff4-b12a-aa2c8b589ba0' [6.02 GB] inherit
  inactive          '/dev/VG_XenStorage-0b3d830f-b140-3fdf-f384-7c56f1e72923/VHD-5e734d3c-2669-432d-8d38-4099d320375d' [8.00 MB] inherit
  inactive          '/dev/VG_XenStorage-0b3d830f-b140-3fdf-f384-7c56f1e72923/VHD-db2d7fd2-018a-4719-ae73-046d402224c6' [6.02 GB] inherit
The volume group also looks good:
[root@host202 backup]# vgscan
  Reading all physical volumes.  This may take a while...
  Found volume group "VG_XenStorage-0b3d830f-b140-3fdf-f384-7c56f1e72923" using metadata type lvm2
The physical volume also looks healthy, and the totals line shows that our disk space is clearly back:
[root@host202 ~]# pvscan
PV /dev/sda3 VG VG_XenStorage-0b3d830f-b140-3fdf-f384-7c56f1e72923 lvm2 [456.71 GB / 138.02 GB free]
Total: 1 [456.71 GB] / in use: 1 [456.71 GB] / in no VG: 0 [0 ]
Following the experiment, activate the entire volume group:
[root@host202 backup]# vgchange -ay VG_XenStorage-0b3d830f-b140-3fdf-f384-7c56f1e72923
  9 logical volume(s) in volume group "VG_XenStorage-0b3d830f-b140-3fdf-f384-7c56f1e72923" now active
[root@host202 backup]# lvscan
  ACTIVE            '/dev/VG_XenStorage-0b3d830f-b140-3fdf-f384-7c56f1e72923/MGT' [4.00 MB] inherit
  ACTIVE            '/dev/VG_XenStorage-0b3d830f-b140-3fdf-f384-7c56f1e72923/VHD-b4df3ed3-d6fd-4276-832b-a3a0f1c70bd0' [8.02 GB] inherit
  ACTIVE            '/dev/VG_XenStorage-0b3d830f-b140-3fdf-f384-7c56f1e72923/VHD-5ceec995-26ec-4986-931f-3d1804807650' [192.38 GB] inherit
  ACTIVE            '/dev/VG_XenStorage-0b3d830f-b140-3fdf-f384-7c56f1e72923/VHD-3a3a681d-c1c2-4636-a656-f9901343d33d' [92.19 GB] inherit
  ACTIVE            '/dev/VG_XenStorage-0b3d830f-b140-3fdf-f384-7c56f1e72923/VHD-a69ae385-924c-42e7-af38-2e38ffeaf851' [8.02 GB] inherit
  ACTIVE            '/dev/VG_XenStorage-0b3d830f-b140-3fdf-f384-7c56f1e72923/VHD-a3e49a56-2326-44d4-a136-3e4a28beded7' [6.02 GB] inherit
  ACTIVE            '/dev/VG_XenStorage-0b3d830f-b140-3fdf-f384-7c56f1e72923/VHD-2b1a8fca-90d7-4ff4-b12a-aa2c8b589ba0' [6.02 GB] inherit
  ACTIVE            '/dev/VG_XenStorage-0b3d830f-b140-3fdf-f384-7c56f1e72923/VHD-5e734d3c-2669-432d-8d38-4099d320375d' [8.00 MB] inherit
  ACTIVE            '/dev/VG_XenStorage-0b3d830f-b140-3fdf-f384-7c56f1e72923/VHD-db2d7fd2-018a-4719-ae73-046d402224c6' [6.02 GB] inherit
4.3 Stage three: disk checks
The disk checks here are completely different from those in the experiment. Because XenServer stores these volumes in Microsoft's VHD format, you must not repair them with e2fsck, or the data is lost for good!
Use the dedicated repair tool vhd-util instead. If you don't mind the tedium, type the commands one by one; if there are many volumes, the check script in the references below will help. With tab completion and only about ten volumes, I simply typed them by hand.
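In place of the referenced script, a minimal hedged loop: it globs the VHD-* volumes under the VG's device directory (which conveniently skips MGT) and runs a checker command over each. The checker is a parameter so the loop can be dry-run first; the function name is my own:

```shell
# Check every VHD-* logical volume under a VG device directory.
# checker: e.g. "vhd-util check -n" (or "echo" for a dry run)
# vg_dir:  e.g. /dev/VG_XenStorage-0b3d830f-b140-3fdf-f384-7c56f1e72923
check_vhds() {
    checker="$1"; vg_dir="$2"
    for vhd in "$vg_dir"/VHD-*; do
        [ -e "$vhd" ] || continue   # no matches: the glob stays literal, skip it
        $checker "$vhd"
    done
}
```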
[root@host202 backup]# vhd-util check -n /dev/VG_XenStorage-0b3d830f-b140-3fdf-f384-7c56f1e72923/VHD-db2d7fd2-018a-4719-ae73-046d402224c6
  /dev/VG_XenStorage-0b3d830f-b140-3fdf-f384-7c56f1e72923/VHD-db2d7fd2-018a-4719-ae73-046d402224c6 is valid
vhd-util check -n /dev/VG_XenStorage-0b3d830f-b140-3fdf-f384-7c56f1e72923/MGT
/dev/VG_XenStorage-0b3d830f-b140-3fdf-f384-7c56f1e72923/MGT appears invalid; dumping headers
VHD Footer Summary:
-------------------
Cookie              : XSSMc
Features            : (0x01000000)
File format version : Major: 15423, Minor: 30829
Data offset         : 77913575334348
Timestamp           : Tue Jul  4 22:41:33 1922
Creator Application : '.0" '
Creator version     : Major: 16190, Minor: 2620
Creator OS          : Unknown!
Original disk size  : 7997602797382 MB (83860943508677 Bytes)
Current disk size   : 634683573958 MB (66551396324759 Bytes)
Geometry            : Cyl: 29801, Hds: 111, Sctrs: 110
                    : = 177671 MB (186301547520 Bytes)
Disk type           : Unknown type!
Checksum            : 0x74686963|0xffffe4c8 (Bad!)
UUID                : 6b0a093c-2f61-6c6c-6f63-6174696f6e3e
Saved state         : Yes
Hidden              : 60

VHD Header Summary:
-------------------
Cookie              :
Data offset (unusd) : 0
Table offset        : 0
Header version      : 0x00000000
Max BAT size        : 0
Block size          : 0 (0 MB)
Parent name         :
Parent UUID         : 00000000-0000-0000-0000-000000000000
Parent timestamp    : Sat Jan  1 00:00:00 2000
Checksum            : 0x0|0xffffffff (Bad!)
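For the curious, the checksum that vhd-util flags as Bad! is easy to verify by hand: per the VHD format, the footer checksum is the one's complement of the byte sum of the 512-byte footer, with the 4-byte checksum field at offset 64 treated as zero. A hedged sketch, assuming a footer has already been copied out with dd (for a dynamic VHD, a mirror copy of the footer sits in the first 512 bytes):

```shell
# Recompute a VHD footer checksum from a 512-byte footer dump.
# The 4 checksum bytes at offset 64 (bytes 65-68, 1-indexed) are skipped,
# then the 32-bit one's complement of the remaining byte sum is printed.
vhd_footer_checksum() {
    sum=$(od -An -v -tu1 "$1" | awk '
        { for (i = 1; i <= NF; i++) { n++; if (n < 65 || n > 68) s += $i } }
        END { print s + 0 }')
    printf '0x%08x\n' $(( 4294967295 - sum ))
}

# example:
#   dd if=/dev/VG_.../MGT of=/tmp/footer bs=512 count=1
#   vhd_footer_checksum /tmp/footer
```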
The scan results were not encouraging: on my machine the first two volumes were damaged, one being MGT and the other a data disk of about 8 GB. Check further to see whether the partition table is still there:
[root@host202 ~]# fdisk -l /dev/VG_XenStorage-0b3d830f-b140-3fdf-f384-7c56f1e72923/MGT

Disk /dev/VG_XenStorage-0b3d830f-b140-3fdf-f384-7c56f1e72923/MGT: 4 MB, 4194304 bytes
255 heads, 63 sectors/track, 0 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/VG_XenStorage-0b3d830f-b140-3fdf-f384-7c56f1e72923/MGT doesn't contain a valid partition table
The partition table was gone! While digging through material I learned that MGT is created automatically when a VDI is plugged in, so I decided to rebuild MGT and abandon the 8 GB disk; there were many 8 GB and 6 GB VHDs around, which I suspected were auto-generated snapshots.
4.4 Stage four: re-detecting the VHDs
The re-detection scheme is simple: forget the local storage repository, then introduce it again. Any VHD that cannot be recognized gets renamed: Xen scans for disk images named VHD-*, so changing the name to old-VHD-* is enough to make the scan skip it.
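The rename step can be scripted in the same dry-run style. This helper (its name and parameters are my own) issues one lvrename per bad VHD; pass "echo lvrename" first to see what it would do:

```shell
# Rename broken VHD-* volumes to old-VHD-* so xe sr-scan skips them.
# rename_cmd: "lvrename" for real, "echo lvrename" for a dry run
# vg:         the VG device path, e.g. /dev/VG_XenStorage-0b3d830f-...
skip_bad_vhds() {
    rename_cmd="$1"; vg="$2"; shift 2
    for name in "$@"; do
        $rename_cmd "$vg/$name" "$vg/old-$name"
    done
}

# example dry run:
#   skip_bad_vhds "echo lvrename" /dev/VG_XenStorage-... VHD-b4df3ed3-...
```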
First collect some identifiers. Use pvscan to find the storage repository's UUID; it is the suffix of the VG_XenStorage- name below:
[root@host202 ~]# pvscan
PV /dev/sda3 VG VG_XenStorage-0b3d830f-b140-3fdf-f384-7c56f1e72923 lvm2 [456.71 GB / 138.02 GB free]
Total: 1 [456.71 GB] / in use: 1 [456.71 GB] / in no VG: 0 [0 ]
Then find the by-id identifier corresponding to the local disk /dev/sda3. Once more: it is /dev/sda3 here, but your machine's disk may well differ, so don't mix them up.
ls -al /dev/disk/by-id
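Rather than eyeballing the listing, the matching by-id link can be resolved programmatically. A hedged helper (the names are mine); on the real host it would be called as byid_for /dev/disk/by-id /dev/sda3:

```shell
# Print every symlink in a by-id style directory that resolves to the
# given device node.
byid_for() {
    dir="$1"; dev=$(readlink -f "$2")
    for link in "$dir"/*; do
        if [ "$(readlink -f "$link")" = "$dev" ]; then
            echo "$link"
        fi
    done
}
```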
Next find this host's UUID with the xe command; it is the uuid field below:
[root@host202 ~]# xe host-list
uuid ( RO) : 0bb221af-3f0b-44ff-9dba-2564fd7b8a11
name-label ( RW): host202
name-description ( RW): Default install of XenServer
Now check whether this host's SR is correct. Its uuid below (4883c621-...) is clearly wrong, belonging to the automatically recreated storage, so the SR has to be rebuilt:
[root@host202 ~]# xe sr-list type=lvm
uuid ( RO) : 4883c621-cad8-e6db-7d17-b33ac4eb1aaa
name-label ( RW): Local Storage
name-description ( RW):
host ( RO): host202
type ( RO): lvm
content-type ( RO):
The rebuild plan: unplug the SR's PBD, forget the SR, then create a new SR with the correct name; plugging it back in generates a fresh MGT automatically, after which XenServer can scan out the remaining good VHDs by itself.
First find the PBD associated with the SR:
xe pbd-list sr-uuid=4883c621-cad8-e6db-7d17-b33ac4eb1aaa
Then forget the SR:
xe sr-forget uuid=4883c621-cad8-e6db-7d17-b33ac4eb1aaa
Then create the SR. This is the one step that took me a very long time:
xe sr-create host-uuid=0bb221af-3f0b-44ff-9dba-2564fd7b8a11 content-type=user name-label="Local Storage" shared=false device-config:device=/dev/disk/by-id/scsi-3600605b00283629017a39a1525dc3ec8-part3 type=lvm
After that, rename the damaged VHD out of the way and rescan the SR:
[root@host202 ~]# lvrename /dev/VG_XenStorage-0b3d830f-b140-3fdf-f384-7c56f1e72923/VHD-b4df3ed3-d6fd-4276-832b-a3a0f1c70bd0 /dev/VG_XenStorage-0b3d830f-b140-3fdf-f384-7c56f1e72923/old-VHD-b4df3ed3-d6fd-4276-832b-a3a0f1c70bd0
Renamed "VHD-b4df3ed3-d6fd-4276-832b-a3a0f1c70bd0" to "old-VHD-b4df3ed3-d6fd-4276-832b-a3a0f1c70bd0" in volume group "VG_XenStorage-0b3d830f-b140-3fdf-f384-7c56f1e72923"
[root@host202 ~]# xe sr-scan uuid=0b3d830f-b140-3fdf-f384-7c56f1e72923
The same procedure was then run end to end on a second damaged host, host204; the transcript follows.
[root@host204 backup]# pvscan
PV /dev/sda3 VG VG_XenStorage-844f33b1-36ce-a8a1-699f-6e53c2ca3a23 lvm2 [456.71 GB / 135.02 GB free]
Total: 1 [456.71 GB] / in use: 1 [456.71 GB] / in no VG: 0 [0 ]
[root@host204 backup]# xe pbd-list sr-uuid=df81f6b1-22ae-3fad-8f24-7654baa4f385
uuid ( RO) : 4a8f5318-98b0-f932-2f98-950198ab6e28
host-uuid ( RO): 78c36865-1129-45f1-98ae-e0428625652e
sr-uuid ( RO): df81f6b1-22ae-3fad-8f24-7654baa4f385
device-config (MRO): device: /dev/disk/by-id/scsi-3600605b00281e90017a3c8ab1eaa9739-part3
currently-attached ( RO): true
[root@host204 backup]# xe host-list
uuid ( RO) : 92d731ad-3936-4cfd-8584-ecc16b425114
name-label ( RW): host205
name-description ( RW): avm
uuid ( RO) : 78c36865-1129-45f1-98ae-e0428625652e
name-label ( RW): host204
name-description ( RW): Default install of XenServer
uuid ( RO) : 0bb221af-3f0b-44ff-9dba-2564fd7b8a11
name-label ( RW): host202
name-description ( RW): Default install of XenServer
[root@host204 backup]# xe pbd-unplug uuid=4a8f5318-98b0-f932-2f98-950198ab6e28
[root@host204 backup]# xe sr-list host=host204
uuid ( RO) : df81f6b1-22ae-3fad-8f24-7654baa4f385
name-label ( RW): Local storage
name-description ( RW):
host ( RO): host204
type ( RO): lvm
content-type ( RO): user
uuid ( RO) : 04509a62-85b7-b5b0-95fe-6fcbfb14323f
name-label ( RW): DVD drives
name-description ( RW): Physical DVD drives
host ( RO): host204
type ( RO): udev
content-type ( RO): iso
uuid ( RO) : c44f02e6-5717-211a-eed0-f2ef74ee6e0d
name-label ( RW): Removable storage
name-description ( RW):
host ( RO): host204
type ( RO): udev
content-type ( RO): disk
[root@host204 backup]# xe sr-forget uuid=df81f6b1-22ae-3fad-8f24-7654baa4f385
[root@host204 backup]# xe sr-introduce uuid=844f33b1-36ce-a8a1-699f-6e53c2ca3a23 type=lvm name-label="Local Storage"
844f33b1-36ce-a8a1-699f-6e53c2ca3a23
[root@host204 backup]# lvscan
ACTIVE '/dev/VG_XenStorage-844f33b1-36ce-a8a1-699f-6e53c2ca3a23/MGT' [4.00 MB] inherit
ACTIVE '/dev/VG_XenStorage-844f33b1-36ce-a8a1-699f-6e53c2ca3a23/VHD-e5163350-7a65-4424-9e98-91ed74b1771b' [8.02 GB] inherit
ACTIVE '/dev/VG_XenStorage-844f33b1-36ce-a8a1-699f-6e53c2ca3a23/VHD-3ad95f97-cc0a-4033-b832-ceeaac19ddf6' [192.38 GB] inherit
ACTIVE '/dev/VG_XenStorage-844f33b1-36ce-a8a1-699f-6e53c2ca3a23/VHD-e163f2b5-0d1a-4e2a-8bc9-0d9ab467a01a' [50.11 GB] inherit
ACTIVE '/dev/VG_XenStorage-844f33b1-36ce-a8a1-699f-6e53c2ca3a23/VHD-74bc6f50-c8a4-4f50-af0f-db463d2d0cad' [8.02 GB] inherit
ACTIVE '/dev/VG_XenStorage-844f33b1-36ce-a8a1-699f-6e53c2ca3a23/VHD-c6ad7774-1419-49aa-a984-0348e4848683' [6.02 GB] inherit
ACTIVE '/dev/VG_XenStorage-844f33b1-36ce-a8a1-699f-6e53c2ca3a23/VHD-52429a66-a0bf-410a-8858-f9e45c1e700a' [6.02 GB] inherit
ACTIVE '/dev/VG_XenStorage-844f33b1-36ce-a8a1-699f-6e53c2ca3a23/VHD-bb78fd95-7746-46a6-ab6a-fab578b7d64e' [6.02 GB] inherit
ACTIVE '/dev/VG_XenStorage-844f33b1-36ce-a8a1-699f-6e53c2ca3a23/VHD-5f27bce7-6cf5-4cce-a8d6-c77fbfa51774' [45.09 GB] inherit
[root@host204 backup]# lvrename /dev/VG_XenStorage-844f33b1-36ce-a8a1-699f-6e53c2ca3a23/MGT /dev/VG_XenStorage-844f33b1-36ce-a8a1-699f-6e53c2ca3a23/oldMGT
Renamed "MGT" to "oldMGT" in volume group "VG_XenStorage-844f33b1-36ce-a8a1-699f-6e53c2ca3a23"
[root@host204 backup]# lvrename /dev/VG_XenStorage-844f33b1-36ce-a8a1-699f-6e53c2ca3a23/VHD-e5163350-7a65-4424-9e98-91ed74b1771b /dev/VG_XenStorage-844f33b1-36ce-a8a1-699f-6e53c2ca3a23/bad-VHD-e5163350-7a65-4424-9e98-91ed74b1771b
Renamed "VHD-e5163350-7a65-4424-9e98-91ed74b1771b" to "bad-VHD-e5163350-7a65-4424-9e98-91ed74b1771b" in volume group "VG_XenStorage-844f33b1-36ce-a8a1-699f-6e53c2ca3a23"
[root@host204 backup]# xe pbd-create sr-uuid=844f33b1-36ce-a8a1-699f-6e53c2ca3a23 host-uuid=78c36865-1129-45f1-98ae-e0428625652e device-config:device=/dev/disk/by-id/scsi-3600605b00281e90017a3c8ab1eaa9739-part3
e45ba036-c59e-e3e3-d8b5-19be0cbfe336
[root@host204 backup]# xe pbd-plug uuid=e45ba036-c59e-e3e3-d8b5-19be0cbfe336
[root@host204 backup]# xe sr-scan uuid=844f33b1-36ce-a8a1-699f-6e53c2ca3a23
References
- Repairing a damaged ext2/3 filesystem superblock (an experiment) http://blog.sina.com.cn/s/blog_4b51d4690100ndhm.html
- Recovering a Lost LVM Volume Disk http://www.novell.com/coolsolutions/appnote/19386.html
- XenServer Database Tool http://support.citrix.com/article/CTX121564
- VDI Metadata Corruption http://discussions.citrix.com/topic/300932-vdi-metadata-corruption/
- XenServer Metadata Corrupt Workaround http://virtualdesktopninja.com/VDINinja/2012/xenserver-metadata-corrupt-workaround/
- http://www.ganomi.com/wiki/index.php?title=Check_for_consistency_in_the_VHD_metadata
- http://blog.adamsbros.org/2009/05/30/recover-lvm-volume-groups-and-logical-volumes-without-backups/
- http://discussions.citrix.com/topic/282493-vdi-is-not-available-xenserver-56fp1/page-2
- http://rritw.com/a/bianchengyuyan/C__/20130814/411428.html
- http://support.citrix.com/article/CTX136342
- http://help.31dns.net/index.php/category/xenserver/
- http://golrizs.com/2012/01/how-to-reinstall-xenserver-and-preserve-virtual-machines-on-a-local-disk/
- http://www.xenme.com/1796
- http://blogs.citrix.com/2013/06/27/openstack-xenserver-type-image-to-volume/
- http://natesbox.com/blog/data-recovery-finding-vhd-files/
- http://itknowledgeexchange.techtarget.com/linux-lotus-domino/recovering-files-from-an-lvm-or-ext3-partition-with-testdisk/
- http://zhangyu.blog.51cto.com/197148/1095637
- MBR and GPT partition structures explained http://dengqi.blog.51cto.com/5685776/1348951
- The FAT32 filesystem explained http://dengqi.blog.51cto.com/5685776/1349327
- The internal structure of the NTFS filesystem http://dengqi.blog.51cto.com/5685776/1351300
- NTFS data recovery: parsing the partition structure http://blog.csdn.net/jha334201553/article/details/9088921
- Troubleshooting Disks and File Systems http://technet.microsoft.com/en-us/library/bb457122.aspx
- http://support.microsoft.com/kb/234048
- Logical Disk Management http://www.ntfs.com/ldm.htm
- https://stackoverflow.com/questions/8427372/windows-spanned-disks-ldm-restoration-with-linux
- http://uranus.chrysocome.net/explore2fs/es2fs.htm
- http://blog.csdn.net/ljianhui/article/details/8604140
- https://superuser.com/questions/693045/how-to-recover-partitions-from-an-external-hard-disk
- http://www.r-tt.com/Articles/External_Disk_Recovery/
- http://major.io/2010/12/14/mounting-a-raw-partition-file-made-with-dd-or-dd_rescue-in-linux/
- MySQL "Cannot find or open table x/x" and how to fix it http://blog.csdn.net/xiangliangyu/article/details/8450765
- Recovering MySQL data from .ibd files http://blog.csdn.net/xiangliangyu/article/details/8450812
- Can I find out what version of MySQL from the data files? https://dba.stackexchange.com/questions/41338/can-i-find-out-what-version-of-mysql-from-the-data-files
- Can I find mysql version from data files, need for data restoration https://stackoverflow.com/questions/16324569/can-i-find-mysql-version-from-data-files-need-for-data-restoration
- How to Recover Data using the InnoDB Recovery Tool http://www.chriscalender.com/?p=49
- Enabling innodb_file_per_table in MySQL without stopping the service http://www.php-oa.com/2012/04/20/mysql-innodb_file_per_table.html
- Tools: https://github.com/jaylevitt/recover_innodb_tables
- https://launchpad.net/percona-data-recovery-tool-for-innodb
- http://www.percona.com/docs/wiki/innodb-data-recovery-tool:mysql-data-recovery:start