Configuring multipath on Ubuntu

Summary: configuring multipath on Ubuntu

References

Diagram

Steps

The previous article set up iSCSI storage on Ubuntu; below we build a multipath environment on top of that iSCSI storage.

  • Add a storage device to the host

/dev/nvme0n2

  • Configure tgt
cat /etc/tgt/conf.d/iscsi.conf
<target iqn.2023-02.pendl.com:disk1>
backing-store /dev/nvme0n1
initiator-address 192.168.159.144
</target>
<target iqn.2023-02.pendl.com:disk2>
backing-store /dev/nvme0n2
</target>

The second target above is the multipath iSCSI storage we are creating; it is backed by /dev/nvme0n2 on the host. Because no initiator-address is specified, the target can be reached through any of the host's IP addresses.
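Before restarting tgt, it is worth confirming that the server really does expose more than one address for initiators to use (the two portals 192.168.159.130 and 192.168.159.144 that show up later in discovery). A quick check, assuming the `ip` tool from iproute2 is available:

```shell
# List the server's IPv4 addresses; each non-loopback address is a
# potential iSCSI portal for disk2, since that target has no
# initiator-address restriction.
ip -4 -brief addr show
```

Each non-loopback address printed here should appear as a portal in the client's `iscsiadm -m discovery` output for disk2.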

  • Restart tgt and check its status
systemctl restart tgt
systemctl status tgt

You should see output like the following:

➜  x86 sudo systemctl status tgt.service
● tgt.service - (i)SCSI target daemon
     Loaded: loaded (/lib/systemd/system/tgt.service; enabled; vendor preset: enabled)
     Active: active (running) since Mon 2023-02-13 17:35:23 PST; 2s ago
       Docs: man:tgtd(8)
    Process: 8403 ExecStartPost=/usr/sbin/tgtadm --op update --mode sys --name State -v offline (code=exited, status=0/SUCCESS)
    Process: 8404 ExecStartPost=/usr/sbin/tgt-admin -e -c /etc/tgt/targets.conf (code=exited, status=0/SUCCESS)
    Process: 8455 ExecStartPost=/usr/sbin/tgtadm --op update --mode sys --name State -v ready (code=exited, status=0/SUCCESS)
   Main PID: 8402 (tgtd)
     Status: "Starting event loop..."
      Tasks: 33
     Memory: 1.6M
     CGroup: /system.slice/tgt.service
             └─8402 /usr/sbin/tgtd -f

Feb 13 17:35:23 ubuntu systemd[1]: Starting (i)SCSI target daemon...
Feb 13 17:35:23 ubuntu tgtd[8402]: tgtd: iser_ib_init(3431) Failed to initialize RDMA; load kernel modules?
Feb 13 17:35:23 ubuntu tgtd[8402]: tgtd: work_timer_start(146) use timer_fd based scheduler
Feb 13 17:35:23 ubuntu tgtd[8402]: tgtd: bs_init(387) use signalfd notification
Feb 13 17:35:23 ubuntu tgtd[8402]: tgtd: device_mgmt(246) sz:18 params:path=/dev/nvme0n1
Feb 13 17:35:23 ubuntu tgtd[8402]: tgtd: bs_thread_open(409) 16
Feb 13 17:35:23 ubuntu tgtd[8402]: tgtd: device_mgmt(246) sz:18 params:path=/dev/nvme0n2
Feb 13 17:35:23 ubuntu tgtd[8402]: tgtd: bs_thread_open(409) 16
Feb 13 17:35:23 ubuntu systemd[1]: Started (i)SCSI target daemon.
  • Verify the target
sudo tgtadm --mode target --op show

The output looks like this:

// Target 1 information omitted here
Target 2: iqn.2023-02.pendl.com:disk2
    System information:
        Driver: iscsi
        State: ready
    I_T nexus information:
    LUN information:
        LUN: 0
            Type: controller
            SCSI ID: IET     00020000
            SCSI SN: beaf20
            Size: 0 MB, Block size: 1
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            SWP: No
            Thin-provisioning: No
            Backing store type: null
            Backing store path: None
            Backing store flags:
        LUN: 1
            Type: disk
            SCSI ID: IET     00020001
            SCSI SN: beaf21
            Size: 21475 MB, Block size: 512
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            SWP: No
            Thin-provisioning: No
            Backing store type: rdwr
            Backing store path: /dev/nvme0n2
            Backing store flags:
    Account information:
    ACL information:
        ALL

Client

  • Optionally add a second NIC to the client

To keep things simple, the client here uses a single NIC; only the server side has multiple NICs.
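If you do want a second NIC on the client as well, so that each path travels over its own interface, a netplan sketch along these lines would work. The interface name `ens38`, the file name, and the address are hypothetical and depend on your VM setup:

```shell
# Hypothetical netplan config for a second client NIC; adjust the
# interface name (ens38) and address to match your environment.
cat <<'EOF' | sudo tee /etc/netplan/60-iscsi-second-nic.yaml
network:
  version: 2
  ethernets:
    ens38:
      dhcp4: false
      addresses: [192.168.159.150/24]
EOF
sudo netplan apply
```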

  • Discover the targets
root@ubuntu-vm:~# iscsiadm -m discovery -t st -p 192.168.159.130
192.168.159.130:3260,1 iqn.2023-02.pendl.com:disk2
root@ubuntu-vm:~# iscsiadm -m discovery -t st -p 192.168.159.144
192.168.159.144:3260,1 iqn.2023-02.pendl.com:disk1
192.168.159.144:3260,1 iqn.2023-02.pendl.com:disk2
  • Log in
root@ubuntu-vm:~# iscsiadm -m node -l
Logging in to [iface: default, target: iqn.2023-02.pendl.com:disk2, portal: 192.168.159.130,3260] (multiple)
Logging in to [iface: default, target: iqn.2023-02.pendl.com:disk2, portal: 192.168.159.144,3260] (multiple)
Logging in to [iface: default, target: iqn.2023-02.pendl.com:disk1, portal: 192.168.159.144,3260] (multiple)
Login to [iface: default, target: iqn.2023-02.pendl.com:disk2, portal: 192.168.159.130,3260] successful.
Login to [iface: default, target: iqn.2023-02.pendl.com:disk2, portal: 192.168.159.144,3260] successful.
Login to [iface: default, target: iqn.2023-02.pendl.com:disk1, portal: 192.168.159.144,3260] successful.

After logging in, the server side shows the corresponding change:

Target 2: iqn.2023-02.pendl.com:disk2
    System information:
        Driver: iscsi
        State: ready
    I_T nexus information:
        I_T nexus: 10
            Initiator: iqn.2023-02.pendl.com:client alias: ubuntu-vm
            Connection: 0
                IP Address: 192.168.159.130
        I_T nexus: 11
            Initiator: iqn.2023-02.pendl.com:client alias: ubuntu-vm
            Connection: 0
                IP Address: 192.168.159.144
    LUN information:
        LUN: 0
            Type: controller
            SCSI ID: IET     00020000
            SCSI SN: beaf20
            Size: 0 MB, Block size: 1
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            SWP: No
            Thin-provisioning: No
            Backing store type: null
            Backing store path: None
            Backing store flags:
        LUN: 1
            Type: disk
            SCSI ID: IET     00020001
            SCSI SN: beaf21
            Size: 21475 MB, Block size: 512
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            SWP: No
            Thin-provisioning: No
            Backing store type: rdwr
            Backing store path: /dev/nvme0n2
            Backing store flags:
    Account information:
    ACL information:
        ALL
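By default, open-iscsi may not re-establish these sessions after the client reboots. To make the logins persistent, the node startup mode can be set to automatic; this is a sketch using the target and portals from this lab:

```shell
# Mark all discovered nodes to log in automatically at boot...
sudo iscsiadm -m node -o update -n node.startup -v automatic
# ...or just the multipath target, one portal at a time:
sudo iscsiadm -m node -T iqn.2023-02.pendl.com:disk2 -p 192.168.159.130 \
    -o update -n node.startup -v automatic
sudo iscsiadm -m node -T iqn.2023-02.pendl.com:disk2 -p 192.168.159.144 \
    -o update -n node.startup -v automatic
```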
  • Install the multipath-tools package
apt-get install multipath-tools
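multipath-tools works largely out of the box on Ubuntu, but a minimal /etc/multipath.conf makes the behavior explicit. The fragment below is a sketch, not the package default; `user_friendly_names yes` is what produces names like the `mpathc` seen in the next step:

```shell
# Minimal multipath.conf sketch; assumes the defaults are acceptable for
# a tgt/IET virtual disk (tune path_grouping_policy etc. as needed).
cat <<'EOF' | sudo tee /etc/multipath.conf
defaults {
    user_friendly_names yes
    find_multipaths     yes
}
EOF
sudo systemctl restart multipathd
```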
  • Check the multipath status
root@ubuntu-vm:~# multipath -ll
mpathc (360000000000000000e00000000020001) dm-0 IET     ,VIRTUAL-DISK
size=20G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=enabled
  |- 2:0:0:1 sda 8:0  active ready running
  `- 3:0:0:1 sdb 8:16 active ready running
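For scripting or monitoring, the number of healthy paths can be extracted from `multipath -ll` output. A minimal sketch; the sample below is the output pasted from above, so the parsing can be tried without the lab (against the live system you would pipe `multipath -ll` directly):

```shell
# Count the paths reported as "active ready running".  Live version:
#   multipath -ll | grep -c 'active ready running'
# Here the same parse runs on the sample output captured above.
cat > /tmp/mpath.out <<'EOF'
mpathc (360000000000000000e00000000020001) dm-0 IET     ,VIRTUAL-DISK
size=20G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=enabled
  |- 2:0:0:1 sda 8:0  active ready running
  `- 3:0:0:1 sdb 8:16 active ready running
EOF
grep -c 'active ready running' /tmp/mpath.out   # prints 2
```

To see the redundancy in action, log out of one portal (`iscsiadm -m node -T iqn.2023-02.pendl.com:disk2 -p 192.168.159.130 -u`) and the count drops to 1; log back in with `-l` and it returns to 2.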
  • Partition and format
root@ubuntu-vm:~# ls /dev/mapper/ -l
total 0
crw------- 1 root root 10, 236 Feb 14 09:42 control
lrwxrwxrwx 1 root root       7 Feb 14 10:24 mpathc -> ../dm-0

Next we partition and format /dev/mapper/mpathc. (Note that the mkfs.ext4 below is run on the whole mpathc device rather than on the newly created partition, so the partition table just written is effectively overwritten; that is why mke2fs warns about finding a DOS partition table.)

root@ubuntu-vm:~# fdisk -l /dev/mapper/mpathc
Disk /dev/mapper/mpathc: 20 GiB, 21474836480 bytes, 41943040 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
root@ubuntu-vm:~# fdisk /dev/mapper/mpathc
Welcome to fdisk (util-linux 2.34).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Device does not contain a recognized partition table.
Created a new DOS disklabel with disk identifier 0xf7a0076c.
Command (m for help): n
Partition type
   p   primary (0 primary, 0 extended, 4 free)
   e   extended (container for logical partitions)
Select (default p):
Using default response p.
Partition number (1-4, default 1):
First sector (2048-41943039, default 2048):
Last sector, +/-sectors or +/-size{K,M,G,T,P} (2048-41943039, default 41943039):
Created a new partition 1 of type 'Linux' and of size 20 GiB.
Command (m for help): w
The partition table has been altered.
Syncing disks.
root@ubuntu-vm:~# mkfs.ext4 /dev/mapper/mpathc
mke2fs 1.45.5 (07-Jan-2020)
Found a dos partition table in /dev/mapper/mpathc
Proceed anyway? (y,N) y
Creating filesystem with 5242880 4k blocks and 1310720 inodes
Filesystem UUID: cdf9c090-5de1-41d9-a264-5ccdc45a27e2
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000
Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
  • Mount
mount /dev/mapper/mpathc ./demo
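To make the mount survive reboots, the usual approach is an /etc/fstab entry with `_netdev`, so mounting waits until the network and iSCSI stack are up. A sketch, assuming the mount point is moved to a fixed path such as /mnt/demo (that path is an assumption, not from the original setup):

```shell
# Hypothetical fstab entry: _netdev defers mounting until the network is
# up; nofail lets boot continue if the iSCSI device is absent.
sudo mkdir -p /mnt/demo
echo '/dev/mapper/mpathc /mnt/demo ext4 _netdev,nofail 0 2' | sudo tee -a /etc/fstab
sudo mount /mnt/demo
```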