Before configuring, read https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html-single/DM_Multipath/index.html
# cat /etc/redhat-release
CentOS release 6.4 (Final)
The file /etc/multipath.conf does not exist yet, so we create it:
#mpathconf --enable --with_multipathd y
This creates a default configuration file.
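As a quick check, mpathconf run without arguments prints the current multipath configuration state, and the file should now be in place:
# mpathconf
# ls -l /etc/multipath.conf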
Start the multipathd service:
#service multipathd start
Enable multipathd to start at server boot:
#chkconfig multipathd on
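A quick sanity check that the daemon is running and registered for autostart (standard init/chkconfig commands, nothing EMC-specific):
# service multipathd status
# chkconfig --list multipathd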
Add disks (LUNs) on the storage array…
A quirk of the EMC AX4-5
The AX4-5 presents a LUNZ ("LUN zero") so that the array itself is visible to the host through the FC HBA (possibly required for EMC PowerPath?).
# multipath -v3
Oct 04 16:12:48 | ram0: device node name blacklisted
Oct 04 16:12:48 | ram1: device node name blacklisted
Oct 04 16:12:48 | ram2: device node name blacklisted
Oct 04 16:12:48 | ram3: device node name blacklisted
Oct 04 16:12:48 | ram4: device node name blacklisted
Oct 04 16:12:48 | ram5: device node name blacklisted
Oct 04 16:12:48 | ram6: device node name blacklisted
Oct 04 16:12:48 | ram7: device node name blacklisted
Oct 04 16:12:48 | ram8: device node name blacklisted
Oct 04 16:12:48 | ram9: device node name blacklisted
Oct 04 16:12:48 | ram10: device node name blacklisted
Oct 04 16:12:48 | ram11: device node name blacklisted
Oct 04 16:12:48 | ram12: device node name blacklisted
Oct 04 16:12:48 | ram13: device node name blacklisted
Oct 04 16:12:48 | ram14: device node name blacklisted
Oct 04 16:12:48 | ram15: device node name blacklisted
Oct 04 16:12:48 | loop0: device node name blacklisted
Oct 04 16:12:48 | loop1: device node name blacklisted
Oct 04 16:12:48 | loop2: device node name blacklisted
Oct 04 16:12:48 | loop3: device node name blacklisted
Oct 04 16:12:48 | loop4: device node name blacklisted
Oct 04 16:12:48 | loop5: device node name blacklisted
Oct 04 16:12:48 | loop6: device node name blacklisted
Oct 04 16:12:48 | loop7: device node name blacklisted
Oct 04 16:12:48 | sda: device node name blacklisted
Oct 04 16:12:48 | sdb: not found in pathvec
Oct 04 16:12:48 | sdb: mask = 0x3f
Oct 04 16:12:48 | sdb: dev_t = 8:16
Oct 04 16:12:48 | sdb: size = 0
Oct 04 16:12:48 | sdb: subsystem = scsi
Oct 04 16:12:48 | sdb: vendor = DGC
Oct 04 16:12:48 | sdb: product = LUNZ
Oct 04 16:12:48 | sdb: rev = 0223
Oct 04 16:12:48 | sdb: h:b:t:l = 5:0:0:0
Oct 04 16:12:48 | sdb: tgt_node_name = 0x50060160cbe01e5a
Oct 04 16:12:48 | (null): (DGC:LUNZ) vendor/product blacklisted
Oct 04 16:12:48 | sr0: device node name blacklisted
Oct 04 16:12:48 | dm-0: device node name blacklisted
Oct 04 16:12:48 | dm-1: device node name blacklisted
Oct 04 16:12:48 | dm-2: device node name blacklisted
Oct 04 16:12:48 | sdc: not found in pathvec
Oct 04 16:12:48 | sdc: mask = 0x3f
Oct 04 16:12:48 | sdc: dev_t = 8:32
Oct 04 16:12:48 | sdc: size = 0
Oct 04 16:12:48 | sdc: subsystem = scsi
Oct 04 16:12:48 | sdc: vendor = DGC
Oct 04 16:12:48 | sdc: product = LUNZ
Oct 04 16:12:48 | sdc: rev = 0223
Oct 04 16:12:48 | sdc: h:b:t:l = 6:0:0:0
Oct 04 16:12:48 | sdc: tgt_node_name = 0x50060160cbe01e5a
Oct 04 16:12:48 | (null): (DGC:LUNZ) vendor/product blacklisted
===== no paths =====
If you create the first disk in a Storage Group on the array and assign it to the host, the disk still will not appear, because it falls under LUNZ.
One way to work around this is to make the VERY FIRST disk in EVERY Storage Group a minimally sized one (1 MB), which then takes the place of LUNZ, and only after that create disks of the required size.
Rescan the SCSI or FC buses:
# echo "- - -" > /sys/class/scsi_host/host_number/scan
or
# echo "1" > /sys/class/fc_host/host#/issue_lip
A script to rescan all FC HBAs:
cat /root/shell/fc_rescan.sh
#!/bin/sh
for host in `ls /sys/class/fc_host/`; do
    echo "1" > /sys/class/fc_host/${host}/issue_lip
done
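A possible way to use it (assuming the script is saved at that path): make it executable, run it, then confirm that the array's LUNs are visible to the kernel:
# chmod +x /root/shell/fc_rescan.sh
# /root/shell/fc_rescan.sh
# grep DGC /proc/scsi/scsi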
After creating the first 1 MB disk and a second one of the required size, the new disks sdd and sde become visible over FC.
# multipath -v3
---skip---
Oct 21 14:42:37 | sdd: not found in pathvec
Oct 21 14:42:37 | sdd: mask = 0x3f
Oct 21 14:42:37 | sdd: dev_t = 8:48
Oct 21 14:42:37 | sdd: size = 5138022400
Oct 21 14:42:37 | sdd: subsystem = scsi
Oct 21 14:42:37 | sdd: vendor = DGC
Oct 21 14:42:37 | sdd: product = RAID 10
Oct 21 14:42:37 | sdd: rev = 0223
Oct 21 14:42:37 | sdd: h:b:t:l = 6:0:0:1
Oct 21 14:42:37 | sdd: tgt_node_name = 0x50060160cbe01e5a
Oct 21 14:42:37 | sdd: serial = SL7F1105000019
Oct 21 14:42:37 | sdd: get_state
Oct 21 14:42:37 | loading /lib64/multipath/libcheckemc_clariion.so checker
Oct 21 14:42:37 | sdd: path checker = emc_clariion (controller setting)
Oct 21 14:42:37 | sdd: checker timeout = 30000 ms (sysfs setting)
Oct 21 14:42:37 | sdd: state = running
Oct 21 14:42:37 | sdd: state = 3
Oct 21 14:42:37 | sdd: getuid = /lib/udev/scsi_id --whitelisted --device=/dev/%n (controller setting)
Oct 21 14:42:37 | sdd: uid = 36006016033202b003c630cbef22ce311 (callout)
Oct 21 14:42:37 | sdd: state = running
Oct 21 14:42:37 | sdd: detect_prio = 1 (config file default)
Oct 21 14:42:37 | loading /lib64/multipath/libprioemc.so prioritizer
Oct 21 14:42:37 | sdd: prio = emc (controller setting)
Oct 21 14:42:37 | sdd: emc prio = 0
Oct 21 14:42:37 | dm-3: device node name blacklisted
Oct 21 14:42:37 | sde: not found in pathvec
Oct 21 14:42:37 | sde: mask = 0x3f
Oct 21 14:42:37 | sde: dev_t = 8:64
Oct 21 14:42:37 | sde: size = 5138022400
Oct 21 14:42:37 | sde: subsystem = scsi
Oct 21 14:42:37 | sde: vendor = DGC
Oct 21 14:42:37 | sde: product = RAID 10
Oct 21 14:42:37 | sde: rev = 0223
Oct 21 14:42:37 | sde: h:b:t:l = 5:0:0:1
Oct 21 14:42:37 | sde: tgt_node_name = 0x50060160cbe01e5a
Oct 21 14:42:37 | sde: serial = SL7F1105000019
Oct 21 14:42:37 | sde: get_state
Oct 21 14:42:37 | sde: path checker = emc_clariion (controller setting)
Oct 21 14:42:37 | sde: checker timeout = 30000 ms (sysfs setting)
Oct 21 14:42:37 | sde: state = running
Oct 21 14:42:37 | sde: state = 3
Oct 21 14:42:37 | sde: getuid = /lib/udev/scsi_id --whitelisted --device=/dev/%n (controller setting)
Oct 21 14:42:37 | sde: uid = 36006016033202b003c630cbef22ce311 (callout)
Oct 21 14:42:37 | sde: state = running
Oct 21 14:42:37 | sde: detect_prio = 1 (config file default)
Oct 21 14:42:37 | sde: prio = emc (controller setting)
Oct 21 14:42:37 | sde: emc prio = 1
Since both disks have the same uid (36006016033202b003c630cbef22ce311), multipath creates a single device accessible through the different FC HBAs.
# multipath -l
36006016033202b003c630cbef22ce311 dm-3 DGC,RAID 10
size=2.4T features='1 queue_if_no_path' hwhandler='1 emc' wp=rw
|-+- policy='round-robin 0' prio=0 status=active
| `- 5:0:0:1 sde 8:64 active undef running
`-+- policy='round-robin 0' prio=0 status=enabled
  `- 6:0:0:1 sdd 8:48 active undef running
ll /dev/mapper | grep 36006016033202b003c630cbef22ce311
lrwxrwxrwx. 1 root root 7 Oct 5 18:10 36006016033202b003c630cbef22ce311 -> ../dm-3
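To confirm that both paths really carry the same WWID, the same scsi_id callout that appears in the multipath -v3 output above can be run against each device; both should print the same identifier:
# /lib/udev/scsi_id --whitelisted --device=/dev/sdd
36006016033202b003c630cbef22ce311
# /lib/udev/scsi_id --whitelisted --device=/dev/sde
36006016033202b003c630cbef22ce311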
From here on, the disk /dev/mapper/36006016033202b003c630cbef22ce311 is used for creating logical volumes and so on.
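For example, a minimal LVM sketch on top of the multipath device (the names vg_san and lv_data and the 100G size are arbitrary, for illustration only):
# pvcreate /dev/mapper/36006016033202b003c630cbef22ce311
# vgcreate vg_san /dev/mapper/36006016033202b003c630cbef22ce311
# lvcreate -n lv_data -L 100G vg_san
# mkfs.ext4 /dev/vg_san/lv_data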
Additionally, you can give the created device an alternative name (alias) by adding the following to /etc/multipath.conf:
multipaths {
    multipath {
        wwid 36006016033202b003c630cbef22ce311
        alias clouds
        user_friendly_names no
    }
}
/etc/init.d/multipathd restart
#ll /dev/mapper | grep clouds
lrwxrwxrwx. 1 root root 7 Oct 4 15:27 clouds -> ../dm-3
to be continued…