fence_vmware usage with ESX, vCenter, or vSphere ... VMware products

Introduction:
When building HA with RHCS, fencing is indispensable, mainly to prevent split-brain. For example, with shared storage only the primary node may read and write; on failover, the primary node must be fenced before the service is activated on the standby node, ensuring that only one host accesses the storage at any given time.
In a virtualized environment, the management platform can power virtual machines on and off, so it inherently has fencing capability.
As long as the management platform exposes a power on/off API, fencing can be implemented.
This article covers fencing for VMware virtualization products; the fence_vmware agent shipped with RHCS implements exactly this.

The STDIN PARAMETERS section of man fence_vmware lists the parameters that can be configured in /etc/cluster/cluster.conf.
Also note that parameters marked as required must be configured (for example the management platform's IP, user, password, the name of the virtual machine to operate on, and the action (reboot, on, off, ...)).

For options whose valid range or enumeration values are not given in the help, consult the definitions in /usr/sbin/fence_vmware.
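For example, a quick way to locate the accepted values in the agent script (assuming the fence-agents package installs it at this path):

# search the agent source for the vmware_type handling
grep -n "vmware_type" /usr/sbin/fence_vmware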

Note the important fence_vmware options:
--ip is the IP of the virtualization management host (e.g. the ESX server), not the IP of the virtual machine.
--plug is the virtual machine's name, i.e. the name the management platform uses for the VM.
--username is a user name provisioned on the virtualization management platform.
--password is that user's password.
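Putting these together, a hedged example invocation (host address, credentials and VM name are placeholders) that queries a VM's power state:

# ask the management host 192.168.0.10 for the status of the VM named "node1"
fence_vmware --ip=192.168.0.10 --username=Administrator --password=secret --plug=node1 --action=status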
# fence_vmware --help
Usage:
        fence_vmware [options]
Options:
   -o, --action=<action>          Action: status, reboot (default), off or on
   -a, --ip=<ip>                  IP address or hostname of fencing device
   -l, --username=<name>          Login name
   -p, --password=<password>      Login password or passphrase
   -S, --password-script=<script> Script to run to retrieve password
   -n, --plug=<id>                Physical plug number on device or
                                        name of virtual machine
   -e, --exec=<command>           Command to execute
   -d, --vmware_type=<type>       Type of VMware to connect
   -x, --ssh                      Use ssh connection
   -k, --identity-file=<filename> Identity file (private key) for ssh 
   -s, --vmware-datacenter=<dc>   VMWare datacenter filter
   -v, --verbose                  Verbose mode
   -D, --debug-file=<debugfile>   Debugging to output file
   -V, --version                  Output version information and exit
   -h, --help                     Display this help and exit
   -C, --separator=<char>         Separator for CSV created by 'list' operation
   --power-timeout <seconds>      Test X seconds for status change after ON/OFF
   --shell-timeout <seconds>      Wait X seconds for cmd prompt after issuing command
   --login-timeout <seconds>      Wait X seconds for cmd prompt after login
   --power-wait <seconds>         Wait X seconds after issuing ON/OFF
   --delay <seconds>              Wait X seconds before fencing is started
   --retry-on <attempts>          Count of attempts to retry power on


The man page is a bit more detailed; for instance, when configuring /etc/cluster/cluster.conf, the stdin parameters are the attribute names to use.
For example:
  <fencedevices>
    <fencedevice agent="fence_vmware" ipaddr="management platform host IP" login="management platform user" name="fence_154" passwd="password" port="VM name on the management platform"/>
  </fencedevices>

    <clusternode name="virtual machine IP" nodeid="1">
      <fence>
        <method name="abc">
          <device action="reboot" name="fence_154"/>
        </method>
      </fence>
    </clusternode>
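After editing cluster.conf you can sanity-check the file and then exercise the fence device; a hedged sketch (the node name guest1 is a placeholder, and note that fence_node really power-cycles the target):

# validate cluster.conf syntax (available on cman-based releases)
ccs_config_validate
# ask the fence stack to fence the named node using the configured device
fence_node guest1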

fence_vmware help:
man fence_vmware
FENCE_AGENT(8)                                                  FENCE_AGENT(8)

NAME
       fence_vmware - Fence agent for VMWare

DESCRIPTION
       fence_vmware is an I/O Fencing agent which can be used with the VMware ESX, VMware ESXi or VMware Server to fence virtual machines.

       Before you can use this agent, the VI Perl Toolkit or the vmrun command must be installed on every node that will perform fencing.

       VI Perl Toolkit is preferred for VMware ESX/ESXi and Virtual Center. The vmrun command is the only solution for VMware Server 1/2 (this command also works against ESX/ESXi
       3.5 up2 and VC up2, but is not cluster aware!) and is available as part of the VMware VIX API SDK package. VI Perl and the VIX API SDK are both available from the VMware web
       pages (not in the RHEL repository!).

       You can specify the type of VMware you are connecting to with the -d switch (or vmware_type on stdin). Possible values are esx, server2 and server1. The default value is esx,
       which uses VI Perl. With server1 and server2, the vmrun command is used.

       After you have successfully installed the VI Perl Toolkit or the VIX API, you should be able to run fence_vmware_helper (part of this agent) or the vmrun command. This agent
       supports only vmrun from version 2.0.0 (VIX API 1.6.0).

       fence_vmware accepts options on the command line as well as from stdin. Fenced sends parameters through stdin when it execs the agent. fence_vmware can be run by
       itself with command-line options. This is useful for testing and for turning outlets on or off from scripts.

       Vendor URL: http://www.vmware.com

PARAMETERS

       -o, --action=<action>
              Fencing Action (Default Value: reboot)

       -a, --ip=<ip>
              IP Address or Hostname. This parameter is always required.

       -l, --username=<name>
              Login Name. This parameter is always required.

       -p, --password=<password>
              Login password or passphrase

       -S, --password-script=<script>
              Script to retrieve password

       -n, --plug=<id>
              Physical plug number or name of virtual machine. This parameter is always required.

       -e, --exec=<command>
              Command to execute

       -d, --vmware_type=<type>
              Type of VMware to connect (Default Value: esx)

       -x, --ssh
              SSH connection (Default Value: true)

       -k, --identity-file=<filename>
              Identity file for ssh

       -s, --vmware-datacenter=<dc>
              Show only machines in specified datacenter

       -v, --verbose
              Verbose mode

       -D, --debug-file=<debugfile>
              Write debug information to given file

       -V, --version
              Display version information and exit

       -h, --help
              Display help and exit

       -C, --separator=<char>
              Separator for CSV created by operation list (Default Value: ,)

       --power-timeout
              Test X seconds for status change after ON/OFF (Default Value: 20)

       --shell-timeout
              Wait X seconds for cmd prompt after issuing command (Default Value: 3)

       --login-timeout
              Wait X seconds for cmd prompt after login (Default Value: 5)

       --power-wait
              Wait X seconds after issuing ON/OFF (Default Value: 0)

       --delay
              Wait X seconds before fencing is started (Default Value: 0)

       --retry-on
              Count of attempts to retry power on (Default Value: 1)

ACTIONS

       on     Power on machine.

       off    Power off machine.

       reboot Reboot machine.

       status This returns the status of the plug/virtual machine.

       list   List available plugs with aliases/virtual machines if there is support for more than one device. Returns N/A otherwise.

       monitor
              Check if fencing device is running. List available plugs/virtual machines or get status of machine (if it does not support more).

       metadata
              Display the XML metadata describing this resource.

STDIN PARAMETERS

       action Fencing Action (Default Value: reboot)

       ipaddr IP Address or Hostname. This parameter is always required.


       login  Login Name. This parameter is always required.

       passwd Login password or passphrase

       passwd_script
              Script to retrieve password

       port   Physical plug number or name of virtual machine. This parameter is always required.

       exec   Command to execute

       vmware_type
              Type of VMware to connect (Default Value: esx)

       secure SSH connection (Default Value: true)

       identity_file
              Identity file for ssh

       vmware_datacenter
              Show only machines in specified datacenter

       verbose
              Verbose mode

       debug  Write debug information to given file

       version
              Display version information and exit

       help   Display help and exit

       separator
              Separator for CSV created by operation list (Default Value: ,)

       power_timeout
              Test X seconds for status change after ON/OFF (Default Value: 20)

       shell_timeout
              Wait X seconds for cmd prompt after issuing command (Default Value: 3)

       login_timeout
              Wait X seconds for cmd prompt after login (Default Value: 5)

       power_wait
              Wait X seconds after issuing ON/OFF (Default Value: 0)

       delay  Wait X seconds before fencing is started (Default Value: 0)

       retry_on
              Count of attempts to retry power on (Default Value: 1)

fence_vmware (Fence Agent)        2009-10-20                    FENCE_AGENT(8)

The enumeration values for vmware_type are not listed in the help; you can check the fence_vmware script itself.
From the script's source you can see the supported values: vmware_type can be esx, server2 or server1.
# less /usr/sbin/fence_vmware
# Default type of vmware
VMWARE_DEFAULT_TYPE="esx"

# Check vmware type, set vmware_internal_type to one of VMWARE_TYPE_ value and
# options["-e"] to path (if not specified)
def vmware_check_vmware_type(options):
        global vmware_internal_type

        options["-d"]=options["-d"].lower()

        if (options["-d"]=="esx"):
                vmware_internal_type=VMWARE_TYPE_ESX
                if (not options.has_key("-e")):
                        options["-e"]=VMHELPER_COMMAND
        elif (options["-d"]=="server2"):
                vmware_internal_type=VMWARE_TYPE_SERVER2
                if (not options.has_key("-e")):
                        options["-e"]=VMRUN_COMMAND
        elif (options["-d"]=="server1"):
                vmware_internal_type=VMWARE_TYPE_SERVER1
                if (not options.has_key("-e")):
                        options["-e"]=VMRUN_COMMAND
        else:
                fail_usage("vmware_type can be esx,server2 or server1!")
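Based on the defaults above, hedged example invocations for each type (hosts, credentials and the VMX path are placeholders; the bracketed datacenter form follows the VIX naming convention described later in this article):

# esx is the default and forks the VI Perl helper (fence_vmware_helper)
fence_vmware -d esx -a vcenter -l admin -p pass -n node1 -o status
# server1/server2 fork vmrun; -e can point at a vmrun binary in a non-standard location
fence_vmware -d server2 -e /usr/local/bin/vmrun -a srvhost -l admin -p pass -n '[standard] node1/node1.vmx' -o status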

[A few reposted related articles]
If the information above is not enough, refer to the following articles:

VMware fence agent & Red Hat Cluster Suite

Last updated 04-May-2010

Updates history:

  • 2010-05-04: fence_vmware in RHEL 5.5 is now the former fence_vmware_ng (same syntax but named fence_vmware), so fence_vmware_ng is no longer needed.
  • 2010-01-15: fence_vmware from RHEL 5.3, 5.4 doesn't work with ESX 4.0. Working agent included.
  • 2009-10-07: Tested on ESX 4.0.0 and vCenter 4.0.0
  • 2009-01-19: Fixed vmware-ng. The old one could fail if somebody turned a VM on (for example the VMware cluster itself) before the agent did; this led to an error and the fence agent failed. Now only a warning is displayed and fencing is considered successful.
  • 2009-01-15: New vmware-ng. Speed improvement for the status operation (and thus whole fencing) when many VMs are registered in VMware. Default type esx is now really the default.

We have two agents for fencing VMware virtual machines.

  • The first fence_vmware is in the RHEL 5.3 and 5.4/STABLE2 branch. It was designed and tested against VMware ESX server (not ESXi!) and Server 1.x. It is replaced by the new fence_vmware in RHEL 5.5/STABLE3.
  • The second is in the master/STABLE3 branch. It was designed and tested against VMware ESX/ESXi/VC and Server 2.x/1.x. This is what replaced the old fence_vmware (in master/STABLE3 it is actually named fence_vmware).

Fence_vmware

It is the union of two older agents: fence_vmware_vix and fence_vmware_vi.

VI (in the following text, VI API means not only the original VI Perl API, whose last version is 1.6, but also the VMware vSphere SDK for Perl) is VMware's API for controlling their main business class of VMware products (ESX/VC). This API is fully cluster aware (VMware cluster), so this agent can fence guest machines physically running on an ESX host but managed by VC, and it works without any reconfiguration when a guest is migrated to another ESX host.

VIX is a newer API, working on VMware's "low-end" products (Server 2.x, 1.x), but there is some support for ESX/ESXi 3.5 update 2 and VC 2.5 update 2. This API is NOT cluster aware and is recommended only for Server 2.x and 1.x. But if you use only one ESX/ESXi host, don't have a VMware cluster and never use migration, you can use this API too.

If you are using RHEL 5.5/RHEL 6, just install the fence-agents package and you are ready to use fence_vmware. For distributions with an older fence-agents, you can get this agent from the GIT (RHEL 5.5/STABLE3/master) repository and use it (please make sure to use the current library (fencing.py) too).

Pre-req

The VI Perl API and/or VIX API must be installed on every node in the cluster. This is a big difference from the older agent, where you didn't need to install anything; in exchange, the new agent's configuration is a little less painful (and brings many bonuses).

Running

If you run fence_vmware with -h you will see something like this:

Options:
   -o <action>    Action: status, reboot (default), off or on
   -a <ip>        IP address or hostname of fencing device
   -l <name>      Login name
   -p <password>  Login password or passphrase
   -S <script>    Script to run to retrieve password
   -n <id>        Physical plug number on device or name of virtual machine
   -e             Command to execute
   -d             Type of VMware to connect
   -x             Use ssh connection
   -s             VMWare datacenter filter
   -q             Quiet mode
   -v             Verbose mode
   -D <debugfile> Debugging to output file
   -V             Output version information and exit
   -h             Display this help and exit

Now the parameters one by one, in a little more depth (format: short option - XML argument name - description).

  • o - action - This is the same as with any other agent.
  • a - ipaddr - Hostname/IP address of VMware ESX/ESXi/VC or Server 2.x/1.x. You can append a TCP port to this option in the usual way (hostname:port). The port is not needed for standard ESX/ESXi/VC installations, but Server 2.x runs its management console on a non-standard port, which is why you have this possibility.
  • l - login - Login name for the management console.
  • p - passwd - Password for the management console.
  • S - passwd_script - Script which retrieves the password.
  • n - port - Virtual machine name. With the VI API this is the guest name you see in the VI Client (for example node1). With the VIX API, the name is in the form [datacenter] path/name.vmx.
  • d - vmware_type - Type of VMware to connect to. This parameter determines which API is used (VI or VIX). Possible values are esx, server2 and server1. The default is esx.
    • esx - VI API is used. The only cluster-aware option; works with ESX/ESXi/VC.
    • server2 - VIX API is used. Works for Server 2.x, ESX/ESXi 3.5 update 2 and VC 2.5 update 2, but is not cluster aware!!!
    • server1 - VIX API. Works only for Server 1.x.
  • s - vmware_datacenter - Filters the available guests. The default is to show all guests in all datacenters. With this you can fence same-named guests if they are in different datacenters (so two node1 guests are no problem). If you never have same-named guests, this option is useless for you.
  • e - exec - Executable to run. In every mode this agent works by forking a helper program which does the real work. For VI it is the Perl fence_vmware_helper; for VIX it is vmrun from the VIX API package. If you have these commands in non-standard locations, you can use this option to specify where the command lives.

Example usage of the agent in CLI mode: you have a VC (named vccenter) with node1, which you want to fence. You will use the Administrator account with password pass.

fence_vmware -a vccenter -l Administrator -p pass -n 'node1'
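Since Server 2.x listens on a non-default management port (see the ipaddr note above), a hedged sketch for that case (8333 is, to my knowledge, the usual Server 2.x HTTPS port; host and VM path are placeholders):

fence_vmware -d server2 -a server2host:8333 -l Administrator -p pass -n '[standard] node1/node1.vmx' -o status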

If everything works, you can modify your cluster.conf as follows (in this example, you have two nodes, guest1 and guest2):

      ...
      <clusternodes>
              <clusternode name="guest1" nodeid="1" votes="1">
                      <fence>
                              <method name="1">
                                      <device name="vmware1"/>
                              </method>
                      </fence>
              </clusternode>
              <clusternode name="guest2" nodeid="2" votes="1">
                      <fence>
                              <method name="1">
                                      <device name="vmware2"/>
                              </method>
                      </fence>
              </clusternode>
      </clusternodes>
      <fencedevices>
              <fencedevice agent="fence_vmware" ipaddr="vccenter" login="Administrator" name="vmware1" passwd="pass" port="guest1"/>
              <fencedevice agent="fence_vmware" ipaddr="vccenter" login="Administrator" name="vmware2" passwd="pass" port="guest2"/>
      </fencedevices>
      ...

You can test the setup with the fence_node <fqdn> command.
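For instance, a hedged example (run it from the surviving node, since a successful test reboots the target; the FQDN is a placeholder):

fence_node guest1.example.com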

Changing configuration from old fence_vmware to new fence_vmware

  • Install the needed VI Perl API on every node
  • Remove the login and passwd parameters
  • Change vmlogin to login and vmpasswd to passwd
  • Change the port value to the shorter name (basically remove /full/path/ and .vmx), as shown in the sketch after this list
  • If you have vmipaddr, delete ipaddr and change vmipaddr to ipaddr.
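A hedged before/after sketch of one fencedevice entry, reusing the example values from elsewhere in this article (your addresses, credentials and VMX path will differ):

Old agent:
      <fencedevice agent="fence_vmware" ipaddr="192.168.1.1" login="test" name="vmware1" passwd="test" vmlogin="root" vmpasswd="root" port="/vmfs/volumes/48bfcbd1-4624461c-8250-0015c5f3ef0f/Rhel/Rhel.vmx"/>
New agent:
      <fencedevice agent="fence_vmware" ipaddr="192.168.1.1" login="root" name="vmware1" passwd="root" port="Rhel"/>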

Problems

One of the biggest problems: ESX 3.5/ESXi 3.5/VC 2.5 behaves very badly when many virtual machines are registered, because getting the list of VMs simply takes too long. This makes fencing in a larger datacenter unusable: with 100+ registered VMs, the whole fencing operation can take a few minutes. This appears to be fixed in ESX 4.0.0/vCenter 4.0.0 (with 200+ registered VMs, fencing one takes ~17 sec). If you don't want to upgrade, you can use a separate datacenter for each cluster.
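In that setup, the -s datacenter filter described above restricts the agent to one datacenter; a hedged sketch (the datacenter name cluster1-dc is a placeholder):

fence_vmware -a vccenter -l Administrator -p pass -s cluster1-dc -n node1 -o status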

Old Fence_vmware

This is the older fence agent, which should work on every ESX server that allows ssh connections and has the vmware-cmd command on it. The basic idea of this agent is to connect via ssh to the ESX server and run vmware-cmd there, which can start/shut down a virtual machine.

In ESX 4.0, vmware-cmd changed a little, so the old agent no longer works. You can solve this by deleting lines 32 and 33 ('if options.has_key("-A"):' and 'cmd_line+=" -v"'), or by downloading fence_vmware.gz (attached to the original wiki page), unpacking it and replacing the original /sbin/fence_vmware.

The biggest problem of this solution is the many parameters which must be entered.

If you run fence_vmware with -h you will see something like this:

   -o <action>    Action: status, reboot (default), off or on
   -a <ip>        IP address or hostname of fencing device
   -l <name>      Login name
   -p <password>  Login password or passphrase
   -S <script>    Script to run to retrieve password
   -x             Use ssh connection
   -k <filename>  Identity file (private key) for ssh
   -n <id>        Physical plug number on device or name of virtual machine
   -A <ip>        IP address or hostname of managed VMware ESX (default localhost)
   -L <name>      VMware ESX management login name
   -P <password>  VMware ESX management login password
   -B <script>    Script to run to retrieve VMware ESX management password
   -q             Quiet mode
   -v             Verbose mode
   -D <debugfile> Debugging to output file
   -V             Output version information and exit
   -h             Display this help and exit

Now the parameters one by one, in a little more depth (format: short option - XML argument name - description).

  • o - action - This is the same as with any other agent.
  • a - ipaddr - Hostname/IP address of the VMware ESX ssh server
  • l - login - Login name for ESX ssh
  • p - passwd - Password for ESX ssh
  • S - passwd_script - Script which retrieves the password
  • A - vmipaddr - VMware ESX hostname/IP address. Here it starts to get more confusing. What's the biggest difference between -a and -A? -a is the address of the computer you ssh to; -A is the address of the VMware server to operate on. This is mostly localhost (think: after you ssh to your ESX host, you want to operate on that machine -> localhost).
  • L - vmlogin - VMware ESX user login. The difference between -l and -L is much the same as between -a and -A: -L is the login on the VMware server being operated on.
  • P - vmpasswd - VMware ESX user password.
  • B - vmpasswd_script - Script to retrieve -P. This script is run on the GUEST machine, not on the VMware ESX machine!
  • n - port - Virtual machine name. This is the output of vmware-cmd -l, mostly something like /vmfs/volumes/48bfcbd1-4624461c-8250-0015c5f3ef0f/Rhel/Rhel.vmx.

I'm a big fan of pictures, so here's an example situation:

  
+---------------------------------------------------------------------------------------+
| +----------                                                                           |
| | guest1  | ssh to VMware ESX - can be where guest1 runs                              |
| | RHEL 5  |------------------+                                                        |
| +---------+                  |                                                        |
|                             \/                                                        |
| +----------      +--------SSH (22)---------------------------------+                  |
| | guest2  |      |        ------> run vmware-cmd with params off --|-> Kill guest1 VM | 
| | RHEL 5  |      |                                                 |                  |
| +---------+      |    dom0 - VMware management console             |                  |
|                  | (192.168.1.1) - Has user test with password test|                  |
|                  |               - Has vmware-cmd                  |                  |
|                  +-------------------------------------------------+                  |
|                                                                                       |
|            VMware ESX hypervisor                                                      |
+---------------------------------------------------------------------------------------+

As you can see, guest1 connects to the VMware management console (with hostname/login/password (-a/-l/-p) for ssh), and there vmware-cmd is run (with hostname/login/password (-A/-L/-P) for VMware).

So why do we have two sets of parameters? Because:

  • On the guest machine you don't need to install anything. No vmware-cmd. You just connect to another machine which has this command.
  • On dom0, ssh is not allowed for the root user, so we can't use the root login/password.
  • Mostly, the owner of the virtual machine is root, but again, ssh is not allowed for that user.

The recommended way to use this agent is:

  • Install the ESX server and set the root password (for example root)
  • Create a normal (non-root) user on the VMware dom0 (in this example I will assume user test with password test)
  • Create a virtual machine (there is a funny thing here: you must use Windows, because the web console cannot create a new virtual machine :) )
  • Install your cluster node

When everything is done, test fencing via the command line (on one of the guests):

fence_vmware -a 192.168.1.1 -l test -p test -L root -P root -o status -n /vmfs/volumes/48bfcbd1-4624461c-8250-0015c5f3ef0f/Rhel/Rhel.vmx

You should get the status of the virtual machine named Rhel.

If everything works, you can modify your cluster.conf like this:

      ...
      <clusternodes>
              <clusternode name="guest1" nodeid="1" votes="1">
                      <fence>
                              <method name="1">
                                      <device name="vmware1"/>
                              </method>
                      </fence>
              </clusternode>
              <clusternode name="guest2" nodeid="2" votes="1">
                      <fence>
                              <method name="1">
                                      <device name="vmware2"/>
                              </method>
                      </fence>
              </clusternode>
      </clusternodes>
      <fencedevices>
              <fencedevice agent="fence_vmware" ipaddr="192.168.1.1" login="test" name="vmware1" passwd="test" vmlogin="root" vmpasswd="root" port="PATH_TO_VMX"/>
              <fencedevice agent="fence_vmware" ipaddr="192.168.1.1" login="test" name="vmware2" passwd="test" vmlogin="root" vmpasswd="root" port="PATH_TO_VMX"/>
      </fencedevices>
      ...

Recommendation for every VMware

The VMware guest machines should have VMware Tools installed, so I recommend installing VMware Tools on all cluster machines. This improves guest performance.



With RHEL's cluster software, the most frustrating piece is the fence device. On the Xen virtualization platform there is at least the "Virtual Machine Fence" to use, but on the VMware virtualization platform there is no ready-made fence device. I searched a lot of material and basically found no solution; some people claimed RHCS has a usable VMware fence, but the details were vague, as if half-hidden.

Hard work pays off: putting together material from several sources, I finally found the solution: https://fedorahosted.org/cluster/wiki/VMware_FencingConfig. The article was originally on the Red Hat site but has since been deleted; fortunately a copy survives, apparently on a Fedora site. I strongly recommend reading that article first.

A brief summary of the article:

1. vCenter itself has fencing capability and can serve as a fence device; the same is true of a VMware vSphere (ESX) host.

2. On RHEL 5.4 and earlier, use fence_vmware_ng to drive the fence; on RHEL 5.5 and later the tool has been renamed fence_vmware, with the same syntax as fence_vmware_ng.

Configuration:

The installation and configuration of RHCS itself is omitted here.

The most important RHCS configuration file is /etc/cluster/cluster.conf; all cluster-related configuration lives there, including the fence configuration.

Open /etc/cluster/cluster.conf and locate <clusternodes>...</clusternodes> and <fencedevices>...</fencedevices>. Modify them in the following format:

<clusternodes>
        <clusternode name="guest1" nodeid="1" votes="1">
                <fence>
                        <method name="1">
                                <device name="vmware1"/>
                        </method>
                </fence>
        </clusternode>
        <clusternode name="guest2" nodeid="2" votes="1">
                <fence>
                        <method name="1">
                                <device name="vmware2"/>
                        </method>
                </fence>
        </clusternode>
</clusternodes>
<fencedevices>
        <fencedevice agent="fence_vmware" ipaddr="vccenter" login="Administrator" name="vmware1" passwd="pass" port="guest1"/>
        <fencedevice agent="fence_vmware" ipaddr="vccenter" login="Administrator" name="vmware2" passwd="pass" port="guest2"/>
</fencedevices>

Notes:

agent is fixed as fence_vmware (or fence_vmware_ng on older releases); ipaddr is the hostname or IP address of the vCenter; login is the vCenter user name; name is the name given to this fence device, i.e. the device name; passwd is the password for the "login" user; port is the name of the virtual machine in vCenter.

Then install the vSphere SDK for Perl (http://www.vmware.com/support/developer/viperltoolkit/) on the operating system, choosing the version that matches your VM hosts. Unpack it and run the installer (vmware-install.pl).
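A hedged install sketch (the tarball and directory names depend on the SDK version you download; vmware-install.pl is, to my knowledge, the installer shipped inside):

# unpack the SDK tarball downloaded from the VMware page above
tar zxvf VMware-vSphere-Perl-SDK-*.tar.gz
# run the installer from the unpacked directory (name varies by version)
cd vmware-vsphere-*/
./vmware-install.pl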

Configure permissions: finally, grant permissions in vCenter to the account you use (if you don't mind using the Administrator account, this can be skipped). Select the relevant virtual machine and add a permission for the user.

 
    
 

Grant the user the required privileges: given what fencing does, the account needs power on/off and reset privileges; if that is too fiddly, grant full privileges on that virtual machine.

 
    
All virtual machines in the RHCS cluster must have their permissions set this way. The above has only been verified on RHEL x64 5.6 with vSphere 4.0; adjust as needed for other setups.

An example fence_vmware configuration in Pacemaker:

primitive fence_vm96 stonith:fence_vmware \
        params ipaddr="192.168.x.x" login="user" passwd="passwd" vmware_datacenter="GZ-xxx" vmware_type="esx" action="reboot" port="dba-test-Cos6.xx" pcmk_reboot_action="reboot" pcmk_host_list="node95 node96" \
        op monitor interval="20" timeout="60s" \
        op start interval="0" timeout="60s" \
        meta target-role="Started"


port corresponds to the fence_vmware command line's -n option.
ipaddr corresponds to the fence_vmware command line's --ip (-a) option.
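To exercise the stonith resource from the command line, a hedged sketch using stonith_admin (the node name node96 comes from pcmk_host_list above; flag spellings may vary across pacemaker versions):

# list the devices registered to fence node96 (should include fence_vm96)
stonith_admin --list node96
# trigger an actual reboot of node96 through the fence device (disruptive!)
stonith_admin --reboot node96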

I hope this helps those who come after.

[References]
1. man fence_vmware
2. man fence_vmware_soap
3. /usr/sbin/fence_vmware
