
Linux Software RAID and LVM

火星人 @ 2014-03-09

RAID

Hardware RAID uses a dedicated RAID card to build the array; software RAID is handled by the operating system.

1. Check whether your hardware RAID device is supported:

```
[root@qiuri ~]# dmraid -l
asr     : Adaptec HostRAID ASR (0,1,10)
ddf1    : SNIA DDF1 (0,1,4,5,linear)
hpt37x  : Highpoint HPT37X (S,0,1,10,01)
hpt45x  : Highpoint HPT45X (S,0,1,10)
isw     : Intel Software RAID (0,1)
jmicron : JMicron ATARAID (S,0,1)
lsi     : LSI Logic MegaRAID (0,1,10)
nvidia  : NVidia RAID (S,0,1,10,5)
pdc     : Promise FastTrack (S,0,1,10)
sil     : Silicon Image(tm) Medley(tm) (0,1,10)
via     : VIA Software RAID (S,0,1,10)
dos     : DOS partitions on SW RAIDs
```

Reading a line such as `nvidia : NVidia RAID (S,0,1,10,5)`: `nvidia` is the format code, "NVidia RAID" is the name, and (S,0,1,10,5) lists the supported RAID levels.

2. Configure the hardware RAID device. In most cases this is done in the BIOS setup.

3. Activate the RAID sets:

```
[root@qiuri ~]# dmraid -a y
```

Verify that activation worked:

```
[root@qiuri ~]# ls /dev/mapper/
control  sil******
```

List the RAID sets:

```
[root@qiuri ~]# dmraid -r
```

The output columns are: 1. RAID name, 2. device file name, 3. RAID level, 4. status (ok), 5. number of sectors.

Show the RAID configuration:

```
[root@qiuri ~]# dmraid -s
```

Deactivate the RAID sets:

```
[root@qiuri ~]# dmraid -a n
```

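The `dmraid -l` listing above follows a fixed `code : name (levels)` shape, so a small script can pull out which formats support a given RAID level. The sketch below is illustrative only and runs over a pasted sample rather than live `dmraid` output; it is not part of the original article.

```shell
# Sketch: filter a saved `dmraid -l` listing for formats that support RAID 5.
# The sample lines are copied from the listing above.
sample='nvidia : NVidia RAID (S,0,1,10,5)
lsi    : LSI Logic MegaRAID (0,1,10)
ddf1   : SNIA DDF1 (0,1,4,5,linear)'

# Split each line on "(", ")" and ":"; the third field is the level list.
raid5=$(printf '%s\n' "$sample" | awk -F'[():]' '{
  code = $1
  gsub(/[[:space:]]+/, "", code)     # strip padding around the format code
  n = split($3, lv, ",")             # levels inside the parentheses
  for (i = 1; i <= n; i++)
    if (lv[i] == "5") { print code; break }
}')
echo "$raid5"
```

On a real system the same filter could be fed from `dmraid -l` directly.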
Configuring software RAID

1. Prepare the component devices: whole disks or partitions. For partitions, set the partition type to "fd" (Linux raid autodetect).

2. Make sure mdadm is installed:

```
rpm -qa | grep mdadm
```

The general syntax is:

```
mdadm [mode] <raiddevice> [options] <component-devices>
```

3. Create the array:

```
[root@qiuri ~]# mdadm --create /dev/md0 --level raid1 --raid-devices 2 /dev/hdb1 /dev/hdb2
mdadm: array /dev/md0 started.
```

Verify by checking the software RAID status:

```
[root@qiuri ~]# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 hdb2[1] hdb1[0]
      1953664 blocks [2/2] [UU]

unused devices: <none>
```

4. Set up the RAID configuration file /etc/mdadm.conf:

```
[root@qiuri ~]# vi /etc/mdadm.conf
DEVICE /dev/hdb1 /dev/hdb2
ARRAY /dev/md0 devices=/dev/hdb1,/dev/hdb2
```

5. Create a filesystem:

```
[root@qiuri ~]# mkfs.ext3 /dev/md0
```

6. Mount it:

```
[root@qiuri ~]# mkdir /mnt/raid
[root@qiuri ~]# mount /dev/md0 /mnt/raid/
[root@qiuri ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                       28G  4.4G   23G  17% /
/dev/hda1              99M   12M   83M  13% /boot
tmpfs                 163M     0  163M   0% /dev/shm
df: `/media/RHEL_5.2 i386 DVD': No such file or directory
/dev/hdc              2.9G  2.9G     0 100% /media
/dev/md0              1.9G   35M  1.8G   2% /mnt/raid
```

Managing a software RAID array

```
[root@qiuri ~]# mdadm --detail /dev/md0
/dev/md0:
        Version : 00.90.03
  Creation Time : Thu Aug 20 03:39:18 2009
     Raid Level : raid1            # RAID level
     Array Size : 1953664 (1908.20 MiB 2000.55 MB)
  Used Dev Size : 1953664 (1908.20 MiB 2000.55 MB)
   Raid Devices : 2                # number of member disks
  Total Devices : 2
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Thu Aug 20 03:45:50 2009
          State : clean            # current state of md0
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           UUID : d2920143:d29ffec1:ad2fcd18:813f7a5e
         Events : 0.2

    Number   Major   Minor   RaidDevice State        # per-member details
       0       3       65        0      active sync   /dev/hdb1
       1       3       66        1      active sync   /dev/hdb2
```

Simulating a failure:

```
[root@qiuri ~]# mdadm /dev/md0 --set-faulty /dev/hdb2
mdadm: set /dev/hdb2 faulty in /dev/md0
```

Verify:

```
[root@qiuri ~]# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 hdb2[2](F) hdb1[0]
      1953664 blocks [2/1] [U_]
unused devices: <none>
```

One "U" is missing from [UU]: a member disk has failed.

```
[root@qiuri ~]# mdadm --detail /dev/md0 | tail -5
    Number   Major   Minor   RaidDevice State
       0       3       65        0      active sync   /dev/hdb1
       1       0       0         1      removed
       2       3       66        -      faulty spare   /dev/hdb2
```

Remove the failed disk from the array:

```
[root@qiuri ~]# mdadm /dev/md0 --remove /dev/hdb2
mdadm: hot removed /dev/hdb2
```

Verify the removal:

```
[root@qiuri ~]# mdadm --detail /dev/md0 | tail -5
         Events : 0.6
    Number   Major   Minor   RaidDevice State
       0       3       65        0      active sync   /dev/hdb1
       1       0       0         1      removed
```

Add the replacement disk to the array:

```
[root@qiuri ~]# mdadm /dev/md0 --add /dev/hdb2
mdadm: re-added /dev/hdb2
```

Verify:

```
[root@qiuri ~]# mdadm --detail /dev/md0 | tail -5
         Events : 0.6
    Number   Major   Minor   RaidDevice State
       0       3       65        0      active sync   /dev/hdb1
       1       3       66        1      active sync   /dev/hdb2
```

Starting and stopping the array

To stop the array, first check whether it is mounted; if so, unmount it before stopping:

```
[root@qiuri ~]# umount /mnt/raid/
[root@qiuri ~]# mdadm --stop /dev/md0
mdadm: stopped /dev/md0
```

Verify:

```
[root@qiuri ~]# mdadm --detail /dev/md0
mdadm: md device /dev/md0 does not appear to be active.
```

To start the array again, make sure /etc/mdadm.conf is configured, then:

```
[root@qiuri ~]# mdadm --assemble --scan /dev/md0
mdadm: /dev/md0 has been started with 2 drives.
```

Monitoring the array

1. Configure mail delivery.
2. Start the mdmonitor service:

```
[root@qiuri ~]# service mdmonitor start
```

Automatic mounting: add this entry to /etc/fstab:

```
/dev/md0    /mnt/raid    ext3    defaults    0 0
```

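The [UU]/[U_] check used in the failure simulation above is easy to script: a degraded array shows an underscore inside the status brackets of /proc/mdstat. The sketch below runs on a pasted sample so it can be tried without a real array; it is illustrative only, and on a live system you would read /proc/mdstat directly.

```shell
# Sketch: detect a degraded array from /proc/mdstat-style text.
# Sample copied from the failure simulation above.
sample='Personalities : [raid1]
md0 : active raid1 hdb2[2](F) hdb1[0]
      1953664 blocks [2/1] [U_]
unused devices: <none>'

# An underscore among the Us means a missing/failed member.
if printf '%s\n' "$sample" | grep -q '\[U*_U*\]'; then
  status=degraded
else
  status=healthy
fi
echo "$status"
```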
To have the array assembled automatically at boot, set up the RAID configuration file. Its default name is mdadm.conf and it does not exist by default; you have to create it yourself. Its main purpose is to let the system auto-assemble the software RAID at boot, and it also makes later management easier:

```
# mdadm --detail --scan > /etc/mdadm.conf
```

LVM (Logical Volume Management)

- Physical volume (PV): a physical device (a partition or a whole disk).
- Volume group (VG): a collection of physical volumes, referred to by a single name.
- Logical volume (LV): a slice carved out of a volume group.

Preparation: create the partitions.

Create the physical volumes:

```
[root@qiuri ~]# pvcreate /dev/hdd1 /dev/hdd2 /dev/hdd3
  Physical volume "/dev/hdd1" successfully created
  Physical volume "/dev/hdd2" successfully created
  Physical volume "/dev/hdd3" successfully created
[root@qiuri ~]# pvscan
  /dev/cdrom: open failed: Read-only file system
  Attempt to close device '/dev/cdrom' which is not open.
  PV /dev/hdd1         lvm2 [1.86 GB]
  PV /dev/hdd2         lvm2 [1.86 GB]
  PV /dev/hdd3         lvm2 [1.86 GB]
  Total: 4 [35.46 GB] / in use: 1 [29.88 GB] / in no VG: 3 [5.59 GB]
```

Create the volume group:

```
[root@qiuri ~]# vgcreate vg0 /dev/hdd1 /dev/hdd2 /dev/hdd3
  Volume group "vg0" successfully created
[root@qiuri ~]# vgscan
  Reading all physical volumes.  This may take a while...
  /dev/cdrom: open failed: Read-only file system
  Attempt to close device '/dev/cdrom' which is not open.
  Found volume group "vg0" using metadata type lvm2
```

Create a logical volume:

```
[root@qiuri ~]# lvcreate -n lv0 -L 1000M vg0
  Logical volume "lv0" created
```

Format and mount it:

```
[root@qiuri ~]# mkfs.ext3 /dev/vg0/lv0
[root@qiuri ~]# mkdir /mnt/lv0
[root@qiuri ~]# mount /dev/vg0/lv0 /mnt/lv0/
```

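LVM accounting is easiest to follow in physical extents (PEs). With the 4 MiB PE size used in this setup, the figures reported by pvdisplay and vgdisplay can be reproduced with simple arithmetic; the sketch below is illustrative and not part of the original article.

```shell
# Sketch: reproduce the PE accounting for this LVM layout (PE size 4 MiB).
pe_mib=4
total_pe_per_pv=476            # pvdisplay's "Total PE" for each 1.86 GB PV

# The 1000 MB logical volume lv0 consumes 1000 / 4 = 250 extents...
alloc_pe=$((1000 / pe_mib))
# ...leaving 476 - 250 = 226 free extents on that PV.
free_pe=$((total_pe_per_pv - alloc_pe))

# The volume group spans three such PVs: 3 * 476 = 1428 extents,
# i.e. 1428 * 4 MiB = 5712 MiB, which vgdisplay rounds to "5.58 GB".
vg_pe=$((3 * total_pe_per_pv))
vg_mib=$((vg_pe * pe_mib))

echo "$alloc_pe $free_pe $vg_pe $vg_mib"
```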
Inspect the results:

```
[root@qiuri ~]# pvdisplay /dev/hdd1
  --- Physical volume ---
  PV Name               /dev/hdd1
  VG Name               vg0
  PV Size               1.86 GB / not usable 3.96 MB
  Allocatable           yes
  PE Size (KByte)       4096
  Total PE              476
  Free PE               226
  Allocated PE          250
  PV UUID               Wkcb0g-VHaf-D372-qdj1-12x1-qxva-OvyXCp

[root@qiuri ~]# vgdisplay
  --- Volume group ---
  VG Name               vg0
  System ID
  Format                lvm2
  Metadata Areas        3
  Metadata Sequence No  2
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                3
  Act PV                3
  VG Size               5.58 GB
  PE Size               4.00 MB
  Total PE              1428
  Alloc PE / Size       250 / 1000.00 MB
  Free  PE / Size       1178 / 4.60 GB
  VG UUID               Um5lbv-RknO-OPwy-mYyl-2FGn-dBK2-v0zdCJ

[root@qiuri ~]# lvdisplay
  --- Logical volume ---
  LV Name                /dev/vg0/lv0
  VG Name                vg0
  LV UUID                J2XIlI-Nag2-wwG0-l2dq-wNOZ-AgIK-9pNQ6O
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                1000.00 MB
  Current LE             250
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:2
```

lvdisplay also lists the logical volumes of the pre-existing VolGroup00:

```
  --- Logical volume ---
  LV Name                /dev/VolGroup00/LogVol00
  VG Name                VolGroup00
  LV UUID                EIHgaM-BY5z-ydzD-G01u-Tu3b-Dxre-ayZsp5
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                28.72 GB
  Current LE             919
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0

  --- Logical volume ---
  LV Name                /dev/VolGroup00/LogVol01
  VG Name                VolGroup00
  LV UUID                9AsYUt-qP3d-nbYp-KhZX-oWYb-Kitm-KqkEje
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                1.16 GB
  Current LE             37
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1
```

Extending the volume group

If you see an error like this while repartitioning (see the disk-partitioning chapter):

```
The partition table has been altered!
Calling ioctl() to re-read partition table.
WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.
```

the fix is to re-read the partition table:

```
[root@qiuri ~]# partprobe /dev/hdd
```

Check the current volume group size:

```
[root@qiuri ~]# vgdisplay vg0 | grep 'VG Size'
  VG Size               5.58 GB
```

Check whether /dev/hdd4 is already a PV:

```
[root@qiuri ~]# pvdisplay /dev/hdd4
  No physical volume label read from /dev/hdd4
  Failed to read physical volume "/dev/hdd4"
```

Create a PV on /dev/hdd4:

```
[root@qiuri ~]# pvcreate /dev/hdd4
  Physical volume "/dev/hdd4" successfully created
```

Verify again:

```
[root@qiuri ~]# pvdisplay /dev/hdd4
  "/dev/hdd4" is a new physical volume of "2.41 GB"
  --- NEW Physical volume ---
  PV Name               /dev/hdd4
  VG Name
  PV Size               2.41 GB
  Allocatable           NO
  PE Size (KByte)       0
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               LsnML0-ny5i-rEGK-7xq3-LU2Y-Yr0i-sjrzAG
```

Add the new PV to the existing volume group vg0:

```
[root@qiuri ~]# vgextend vg0 /dev/hdd4
  /dev/cdrom: open failed: Read-only file system
  /dev/cdrom: open failed: Read-only file system
  Attempt to close device '/dev/cdrom' which is not open.
  Volume group "vg0" successfully extended
```

Verify that the VG grew:

```
[root@qiuri ~]# vgdisplay vg0 | grep 'VG Size'
  VG Size               7.98 GB
```

To remove the /dev/hdd4 PV from the VG again:

```
[root@qiuri ~]# vgreduce vg0 /dev/hdd4
  Removed "/dev/hdd4" from volume group "vg0"
```

Verify the removal:

```
[root@qiuri ~]# vgdisplay vg0 | grep 'VG Size'
  VG Size               5.58 GB
```

Resizing a logical volume

Current state:

```
[root@qiuri ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                       28G  4.4G   23G  17% /
/dev/hda1              99M   12M   83M  13% /boot
tmpfs                 163M     0  163M   0% /dev/shm
/dev/mapper/vg0-lv0   985M   18M  918M   2% /mnt/lv0
```

Grow the LV by 500M. Note the "+": -L +500M grows the volume by 500M, whereas -L 500M would set its size to 500M:

```
[root@qiuri ~]# lvextend -L +500M /dev/vg0/lv0
  Extending logical volume lv0 to 1.46 GB
  Logical volume lv0 successfully resized
```

Check with df -h: the reported capacity has not grown, because only the LV was enlarged, not the filesystem on it:

```
[root@qiuri ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                       28G  4.4G   23G  17% /
/dev/hda1              99M   12M   83M  13% /boot
tmpfs                 163M     0  163M   0% /dev/shm
/dev/mapper/vg0-lv0   985M   18M  918M   2% /mnt/lv0
```

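The "1.46 GB" in the lvextend message is just the old 1000 MB plus the 500 MB extension, expressed in binary gigabytes. A quick illustrative check (not from the original article):

```shell
# Sketch: 1000 MiB + 500 MiB expressed in GiB, as lvextend reports it.
new_mib=$((1000 + 500))
awk -v m="$new_mib" 'BEGIN { printf "%.2f GB\n", m / 1024 }'
```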
Reduce the LV by 200M (note the destructive warning):

```
[root@qiuri ~]# lvreduce -L -200M /dev/vg0/lv0
  WARNING: Reducing active and open logical volume to 1.27 GB
  THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce lv0? [y/n]: y
  Reducing logical volume lv0 to 1.27 GB
  Logical volume lv0 successfully resized
[root@qiuri ~]# lvdisplay
  --- Logical volume ---
  LV Name                /dev/vg0/lv0
  VG Name                vg0
  LV UUID                J2XIlI-Nag2-wwG0-l2dq-wNOZ-AgIK-9pNQ6O
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                1.27 GB
```

Now resize the filesystem to match the LV. Unmount it first; resize2fs refuses to run until the filesystem has been forcibly checked with e2fsck:

```
[root@qiuri ~]# umount /mnt/lv0
[root@qiuri ~]# resize2fs /dev/vg0/lv0
resize2fs 1.39 (29-May-2006)
Please run 'e2fsck -f /dev/vg0/lv0' first.

[root@qiuri ~]# e2fsck -f /dev/vg0/lv0
e2fsck 1.39 (29-May-2006)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/vg0/lv0: 11/128000 files (9.1% non-contiguous), 8444/256000 blocks

[root@qiuri ~]# resize2fs /dev/vg0/lv0
resize2fs 1.39 (29-May-2006)
Resizing the filesystem on /dev/vg0/lv0 to 384000 (4k) blocks.
The filesystem on /dev/vg0/lv0 is now 384000 blocks long.
```

Remount and check:

```
[root@qiuri ~]# mount /dev/vg0/lv0 /mnt/lv0/
[root@qiuri ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                       28G  4.4G   23G  17% /
/dev/hda1              99M   12M   83M  13% /boot
tmpfs                 163M     0  163M   0% /dev/shm
/dev/mapper/vg0-lv0   1.5G   18M  1.4G   2% /mnt/lv0
```

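The resize2fs figure above is easy to sanity-check: 384000 blocks of 4 KiB each is 1500 MiB, which df rounds to the 1.5G shown. Illustrative arithmetic only:

```shell
# Sketch: 384000 4-KiB blocks expressed in MiB and GiB.
blocks=384000
mib=$((blocks * 4 / 1024))          # 4 KiB per block, 1024 KiB per MiB
awk -v m="$mib" 'BEGIN { printf "%d MiB = %.1f GiB\n", m, m / 1024 }'
```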
Shrinking a logical volume

Note: if the logical volume holds data, back it up first, and shrink the filesystem with resize2fs before shrinking the LV:

```
[root@qiuri ~]# lvreduce -L -200M /dev/vg0/lv0
```

Tearing it all down:

```
[root@qiuri ~]# lvremove /dev/vg0/lv0
  Logical volume "lv0" successfully removed
[root@qiuri ~]# vgremove vg0
  Volume group "vg0" successfully removed
[root@qiuri ~]# pvremove /dev/hdd1 /dev/hdd2 /dev/hdd3 /dev/hdd4
  /dev/cdrom: open failed: Read-only file system
  Attempt to close device '/dev/cdrom' which is not open.
  Labels on physical volume "/dev/hdd1" successfully wiped
  Labels on physical volume "/dev/hdd2" successfully wiped
  Labels on physical volume "/dev/hdd3" successfully wiped
  Labels on physical volume "/dev/hdd4" successfully wiped
[root@qiuri ~]# pvscan
  /dev/cdrom: open failed: Read-only file system
  Attempt to close device '/dev/cdrom' which is not open.
  PV /dev/hda2   VG VolGroup00   lvm2 [29.88 GB / 0 free]
  Total: 1 [29.88 GB] / in use: 1 [29.88 GB] / in no VG: 0 [0   ]
```

Exercise: RAID 5 + LVM

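The closing exercise (software RAID 5 with LVM on top) can be sketched as the following dry-run plan. The device names /dev/sdb1, /dev/sdc1, /dev/sdd1 and the names md1, vg1, lv1 are assumptions for illustration, not from the article; the script only prints the commands, since running them needs real disks and root privileges.

```shell
# Dry-run sketch for the exercise: build a 3-disk RAID 5, then put LVM on it.
# All device and volume names below are hypothetical.
steps=(
  "mdadm --create /dev/md1 --level raid5 --raid-devices 3 /dev/sdb1 /dev/sdc1 /dev/sdd1"
  "pvcreate /dev/md1"
  "vgcreate vg1 /dev/md1"
  "lvcreate -n lv1 -L 1G vg1"
  "mkfs.ext3 /dev/vg1/lv1"
  "mount /dev/vg1/lv1 /mnt/lv1"
)
# Print the plan instead of executing it.
for s in "${steps[@]}"; do
  echo "$s"
done
```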
This article comes from the 「it民工」 blog; please keep this attribution: http://zyw1209.blog.51cto.com/1266169/441583


