Device Mapper Multipath (DM-Multipath)

Introduction:

Device Mapper Multipath (DM-Multipath) configures the multiple I/O paths between a server node and a storage array as a single device. These I/O paths are the physical SAN connections, made up of separate cables, switches, and controllers.

Multipath aggregates these paths into a single new device.

1. DM-Multipath overview:

(1) Redundancy

DM-Multipath can provide failover in an active/passive configuration. In active/passive mode, only half of the paths carry I/O at any time; if an element of a path (cable, switch, or controller) fails, DM-Multipath switches over to the remaining paths.

(2) Improved performance

DM-Multipath can also be configured in active/active mode, where I/O is spread across all paths in a round-robin fashion. With further configuration, DM-Multipath can detect the load on each path and rebalance it dynamically.
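The choice between the two modes is made with the path grouping policy. Below is a minimal sketch of a defaults section for active/active operation, using the option names from the configuration examples quoted later in this document (newer releases drop the default_ prefix):

       defaults {
               default_path_grouping_policy    multibus
               default_selector                "round-robin 0"
       }

With the grouping policy set to failover instead, only one path carries I/O at a time and the remaining paths are held in reserve.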

 

Which storage arrays DM-Multipath supports can be checked in multipath.conf.defaults. Storage that is not on the supported list can be added manually in the multipath configuration file, multipath.conf.

2. Components of DM-Multipath:

(1) dm-multipath kernel module -- reroutes I/O and supports failover of paths and path groups.

(2) multipath command -- lists and configures multipath devices. It is normally started from /etc/rc.sysinit, by udev whenever a new block device appears, or by initramfs during system startup.

(3) multipathd daemon -- monitors the paths. When a path fails or recovers, it initiates path group switches. It allows multipath devices to be modified interactively, but it must be restarted after multipath.conf is changed.

(4) kpartx command -- creates device mapper devices from the partition table on a device. It is required when DOS-based partitions are used on a DM-MP device, as sketched below.
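A minimal sketch of the kpartx step, assuming a multipath device named /dev/mapper/mpath0 that carries a DOS partition table (the exact partition suffix, e.g. p1, can differ between releases):

       kpartx -a /dev/mapper/mpath0
       ls /dev/mapper/mpath0*          # the partitions appear as additional device mapper devices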

Without DM-Multipath, every path from the server to the storage is seen by the system as a separate device. DM-Multipath creates a single multipath device on top of these underlying devices, organizing and managing the paths.

3. Multipath device identifiers

 

Every multipath device has a WWID (World Wide Identifier), which is globally unique and unchanging. By default, the name of a multipath device is set to its WWID. Alternatively, the user_friendly_names option in the configuration file can be used to give the device an alias of the form mpath[n].

For example, a server node with two HBAs, attached through a single unzoned switch to two controller ports of a disk array, sees four devices: /dev/sda, /dev/sdb, /dev/sdc, and /dev/sdd. Following the configuration file, DM-Multipath creates on top of these underlying devices a single multipath device with one unique WWID. If user_friendly_names is set to yes in the configuration file, the multipath device is named mpath[n].
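To see the WWID that would be used for a given path, the getuid callout quoted later in this document can be run by hand (here against the hypothetical path /dev/sda; the exact scsi_id options vary between releases):

       /sbin/scsi_id -g -u -s /block/sda

The value printed is the WWID under which the four example paths above would be aggregated.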

 

After new devices are brought under DM-Multipath control, the new device files appear in three different places under /dev/: /dev/mapper/mpath[n], /dev/mpath/mpath[n], and /dev/dm-[n].

(1) The devices in /dev/mapper/ are created early in the boot process. Use these device files to access the multipath devices, for example when creating LVM volumes.

(2) The devices in /dev/mpath/ exist so that all multipath devices can be seen conveniently in one directory. They are created by udev. Do not use these files if the multipath device must be accessible during boot, and do not create LVM volumes on them.

(3) The /dev/dm-[n] devices are for internal use only and should never be operated on directly.

4. Consistent multipath device names across nodes

When user_friendly_names is set to yes in the configuration file, the name of a multipath device is unique and persistent on that server node, but it is not guaranteed to be the same on other server nodes that use the same paths. If the device is only used to build LVM volumes, this does not matter. If, however, the multipath device names must be consistent across server nodes, use one of the following methods:

(1) Use the alias option in the multipaths section of the configuration file to set an alias for each device, and keep the aliases identical on every server;

(2) To keep the user-friendly names consistent across servers, set up all of the multipath devices on one server first, then copy its bindings file to every other server that must use the same names, as sketched below. The bindings file is /var/lib/multipath/bindings; its location can be changed with the bindings_file parameter in the configuration file.
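A minimal sketch of method (2), assuming a second node reachable as node2 (a hypothetical hostname):

       # on the node where the multipath devices were first set up
       scp /var/lib/multipath/bindings node2:/var/lib/multipath/bindings
       # restart multipathd on node2 so the copied bindings are picked up
       ssh node2 service multipathd restart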

5. Creating logical volumes on multipath devices

After a multipath device has been created, it can be used like a physical device file to create an LVM physical volume. For example, if the multipath device is /dev/mapper/mpath0, running

       pvcreate /dev/mapper/mpath0

creates mpath0 as a physical volume.

Volume groups and logical volumes can then be created on it in the usual way.
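Continuing the example, a volume group and a logical volume can be built on the physical volume (the names vg0 and lv0 and the 10G size are placeholders):

       vgcreate vg0 /dev/mapper/mpath0
       lvcreate -L 10G -n lv0 vg0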

When creating logical volumes on a multipath device configured in active/passive mode, add a filter to the LVM configuration file, lvm.conf, so that the devices underneath the multipath device are excluded.

This is necessary because DM-Multipath switches paths automatically. During failover and failback, if the underlying devices are not filtered out in the configuration file, LVM scans the paths that are in the passive state. A passive path must have commands run against it before it changes to the active state, so LVM reports errors when it scans those paths.

To filter out all SCSI devices, add the following line to the devices section of lvm.conf:

filter = [ "r/disk/", "r/sd.*/", "a/.*/" ]
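A minimal sketch of where the filter sits in lvm.conf, followed by a quick check that LVM now only reports the multipath devices:

       devices {
               filter = [ "r/disk/", "r/sd.*/", "a/.*/" ]
       }

       pvscan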

6. Setting up DM-Multipath

       6.1 Initial setup

       (1) Edit /etc/multipath.conf and comment out the following lines:

              devnode_blacklist {

                      devnode "*"

              }

       (2) The default multipath settings are already built into the system, so nothing else needs to be reconfigured in /etc/multipath.conf.

       The default value of path_grouping_policy is failover. The defaults section of the initial configuration names multipath devices in the form mpath[n]; without that setting (that is, without user_friendly_names set to yes), a device's default name is its WWID.
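A minimal sketch of a defaults section that keeps the mpath[n] naming and the failover policy described above (user_friendly_names is the option referred to in section 3; the older configuration format quoted later in this document writes the policy option as default_path_grouping_policy):

       defaults {
               user_friendly_names     yes
               path_grouping_policy    failover
       }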

       (3) Save the configuration file and exit.

       (4) Run the following commands:

              modprobe dm-multipath

              service multipathd start

              multipath -v2

       Note: multipath -v2 prints the paths that have been aggregated into multipath devices.

       (5) Run chkconfig multipathd on so that the multipath service starts automatically at boot.

 

       6.2 Excluding local SCSI disks

       Many systems have local SCSI disks, and DM-Multipath is not recommended on them. The following steps remove the multipath mapping for a local SCSI disk.

       (1) Use multipath -v2 to identify the local disk, as in the example below (sda is the local SCSI disk):

       [root@localhost ~]# multipath -v2

       create: SIBM-ESXSST336732LC____F3ET0EP0Q000072428BX1

       [size=33 GB][features="0"][hwhandler="0"]

       \_ round-robin 0

         \_ 0:0:0:0 sda  8:0   

        device-mapper ioctl cmd 9 failed: Invalid argument

        device-mapper ioctl cmd 14 failed: No such device or address

        create: 3600a0b80001327d80000006d43621677

        [size=12 GB][features="0"][hwhandler="0"]

        \_ round-robin 0

          \_ 2:0:0:0 sdb  8:16   

            \_ 3:0:0:0 sdf  8:80   

        create: 3600a0b80001327510000009a436215ec

        [size=12 GB][features="0"][hwhandler="0"]

        \_ round-robin 0

          \_ 2:0:0:1 sdc  8:32   

            \_ 3:0:0:1 sdg  8:96   

        create: 3600a0b80001327d800000070436216b3

        [size=12 GB][features="0"][hwhandler="0"]

        \_ round-robin 0

          \_ 2:0:0:2 sdd  8:48   

            \_ 3:0:0:2 sdh  8:112  

        create: 3600a0b80001327510000009b4362163e

        [size=12 GB][features="0"][hwhandler="0"]

        \_ round-robin 0

            \_ 2:0:0:3 sde  8:64   

            \_ 3:0:0:3 sdi  8:128

        (2) To keep DM-Multipath from mapping /dev/sda, edit the devnode_blacklist section of /etc/multipath.conf.

The device could be excluded with a devnode entry, but the name sda is not guaranteed to be the same after a reboot, so it is better to blacklist it by WWID. The output above shows that the WWID of /dev/sda is "SIBM-ESXSST336732LC____F3ET0EP0Q000072428BX1", so add the following to the configuration file:

        devnode_blacklist {

              wwid SIBM-ESXSST336732LC____F3ET0EP0Q000072428BX1

        }

        (3) Run the following commands to apply the configuration and reprint the list of multipath devices:

              multipath -F

              multipath -v2

 

       6.3 Adding new device types to DM-Multipath

       DM-Multipath supports most storage arrays. The default configuration can be seen in the multipath.conf.defaults file.

       To add a storage device that is not supported by default, add the corresponding information to /etc/multipath.conf. For example, to add HP OPEN-V to the configuration file:

              devices {

                     device {

                            vendor "HP"

                            product "OPEN-V"

                            getuid_callout "/sbin/scsi_id -g -u -p0x80 -s /block/%n"

                     }

              }
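After adding the device entry, apply the change with the commands used elsewhere in this document (multipathd must be restarted for configuration changes to take effect):

              service multipathd restart

              multipath -v2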

 7. The DM-Multipath configuration file

       7.1 Overview

       The DM-Multipath configuration file is divided into the following sections:

       devnode_blacklist

           Devices that DM-Multipath will not use.

By default all devices are on this list; the devnode_blacklist section is usually commented out when DM-Multipath is enabled.

       defaults

           General default settings for DM-Multipath;

       multipaths

           Settings for individual multipath devices. These values override the defaults and devices sections.

       devices

           Settings for individual storage controllers.

These values override the defaults section. If a storage controller is not supported by DM-Multipath, a devices subsection has to be added for that type of controller.

       When DM-Multipath determines the attributes of a multipath device, it first reads the multipaths section, then the devices section, and finally the defaults section.
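A minimal sketch of a multipaths section that pins an alias for one device; it reuses one of the example WWIDs shown in section 6.2, and the alias name mpath_data1 is a placeholder:

       multipaths {
              multipath {
                     wwid    3600a0b80001327d80000006d43621677
                     alias   mpath_data1
              }
       }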

 

       7.2 Configuring the blacklist

       devnode_blacklist specifies the devices that are not used when multipath devices are configured; by default every device is on this list. After commenting out the default entry, a whole class of devices or a single specific device can be added to the list. There are two ways to blacklist a device:

           (1) By WWID:

           A specific device can be identified by its WWID, for example:

           blacklist {

               wwid 26353900f02796769

           }

           (2) By device name:

           For example:

           devnode_blacklist {

               devnode "^sd[a-z]"

           }

            This entry blacklists all SCSI disk devices. Although a single specific device can be excluded this way, it is not recommended, because unless udev has been used to fix the device name, the name may change after a reboot.

            Because some devices do not support DM-Multipath, the following devices are blacklisted by default:

           blacklist {

               devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"

                devnode "^hd[a-z]"

                devnode "^cciss!c[0-9]d[0-9]*"

           }

 

device-mapper multipathing requires a fully updated operating system; early releases of Red Hat Enterprise Linux 4 do not include this functionality. Edit the file /etc/multipath.conf and comment out the following lines at the top of the file:
devnode_blacklist {
         devnode "*"
 }
so that they appear as follows:
# devnode_blacklist {
 #        devnode "*"
 # }
As shipped, this section blacklists every device and keeps device-mapper multipathing from scanning any of them; commenting it out allows the devices to be examined.

An example of the /etc/multipath.conf file:
 defaults {
       multipath_tool  "/sbin/multipath -v0"
       udev_dir        /dev
       polling_interval 10
       default_selector        "round-robin 0"
       default_path_grouping_policy    multibus
       default_getuid_callout  "/sbin/scsi_id -g -u -s /block/%n"
       default_prio_callout    "/bin/true"
       default_features        "0"
       rr_wmin_io              100
       failback                immediate
 }
 devnode_blacklist {
       wwid 26353900f02796769
       devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
       devnode "^hd[a-z][[0-9]*]"
       devnode "^cciss!c[0-9]d[0-9]*[p[0-9]*]"
 }
These settings define the default device-mapper behavior and blacklist devices that usually do not have multiple paths, such as IDE hard drives and floppy drives.

The default blacklist entry for hd* devices contains a typographical error and needs to be corrected.
devnode "^hd[a-z][[0-9]*]"
Change the line above to:
devnode "^hd[a-z][0-9]*"
To provide simple failover, the default_path_grouping_policy option in the defaults group is set to failover in the following example:
defaults {
       multipath_tool         "/sbin/multipath -v0"
       udev_dir                /dev
       polling_interval        10
       default_selector        "round-robin 0"
       default_path_grouping_policy    failover
       default_getuid_callout  "/sbin/scsi_id -g -u -s /block/%n"
       default_prio_callout    "/bin/true"
       default_features        "0"
       rr_wmin_io              100
       failback                immediate
 }
Save the file, exit the editor, and run the following commands:
modprobe dm-multipath
 modprobe dm-round-robin
 service multipathd start
 multipath -v2
The multipath -v2 command prints the multipathed paths, showing which devices are running under multipath. If the command produces no output, verify that all the SAN connections are set up properly and that multipathing is enabled on the system.

Run the following command to make sure the multipath daemon starts on boot:
chkconfig multipathd on
Device names of the form /dev/dm-# are generated for the multipath devices, where # is the number of the multipath group. For example, if /dev/sda and /dev/sdb are two paths to the same multipathed device, /dev/dm-0 will be the multipath device on top of /dev/sda and /dev/sdb.

Note: fdisk cannot be used on the /dev/dm-# devices. Use fdisk only on the underlying disks; then, to create the corresponding partition mappings on the device-mapper multipath device, run:
kpartx -a /dev/dm-#
Note: dmsetup ls --target multipath

is a helpful command for detecting the multipath devices on the system. If a piece of hardware is not found in the multipathing database, refer to the article "How can I add more products into the multipathing database?"

 

device-mapper-multipath-0.4.7

=============================
RHEL4 U3 Device Mapper Multipath Usage

Maintainer
------------
Benjamin Marzinski
bmarzins@redhat.com

Overview
------------
Device Mapper Multipath (DM-MP) allows nodes to route I/O over multiple paths to
a storage controller. A path refers to the connection from an HBA port to a
storage controller port. As paths fail and new paths come up, DM-MP reroutes 
the I/O over the available paths.

When there are multiple paths to a storage controller, each path
appears as a separate device.  DM-MP creates a new device on top of
those devices. For example, a node with two HBAs attached to a storage
controller with two ports via a single unzoned FC switch sees four
devices: /dev/sda, /dev/sdb, /dev/sdc, and /dev/sdd. DM-MP creates a
single device, /dev/mpath/mpath1 that reroutes I/O to those four
underlying devices.

DM-MP consists of the
following components:

o dm-multipath kernel module -- This module reroutes I/O and fails
  over paths and path groups.

o multipath command -- This command configures, lists, and removes multipath
  devices. The command is run in rc.sysinit during startup, and by
  udev, whenever a block device is added.

o multipathd daemon --  This daemon monitors paths, checking to see if faulty
  paths have been fixed. As paths come back up, multipathd may also
  initiate path group switches to ensure that the optimal path group
  is being used. Also, it is possible to interactively modify a
  multipath device.

o kpartx command -- This command creates Device Mapper devices for the
  partitions on a device. It is necessary to use this command for DOS-
  based partitions with DM-MP.


DM-MP works with a variety of storage arrays. It
auto-configures the following storage arrays:

o 3PARdata VV
o Compaq HSV110
o Compaq MSA1000
o DDN SAN DataDirector
o DEC HSG80
o EMC SYMMETRIX
o EMC CLARiiON
o FSC CentricStor
o GNBD
o HITACHI DF400
o HITACHI DF500
o HITACHI DF600
o HP HSV110
o HP HSV210
o HP A6189A
o HP Open-
o IBM 3542
o IBM ProFibre 4000R
o NETAPP
o SGI TP9100
o SGI TP9300
o SGI TP9400
o SGI TP9500
o STK OPENstorage D280
o SUN StorEdge 3510
o SUN T4

Storage arrays not included in the list may require entries in the
/etc/multipath.conf file.

NOTE: Some storage arrays require special handling of I/O errors and
      path-group switching. Those require separate hardware handler
      kernel modules.

 

Terms and Concepts
---------------------

Hardware Handler:
        A kernel module that performs hardware-specific actions when switching
        path groups and dealing with I/O errors.

Path:
        The connection from an HBA port to a storage controller port for a LUN.
        Each path appears as a separate device. Paths can be in
        various states (refer to "Path States").

Path States:
        ready  - Path is able to handle I/O requests.
        shaky  - Path is up, but temporarily not available for normal
                 operations.
        faulty - Path is unable to handle I/O requests.
        ghost  - Path is a passive path, on an active/passive controller.

        NOTE: The shaky and ghost states only exist for certain
              storage arrays.

Path Group:
        A grouping of paths. With DM-MP, only one path group--the
        active path group--receives I/O at any time. Within a path
        group, DM-MP selects which ready path should receive I/O in a
        round robin fashion. Path groups can be in various states (refer to
        "Path Group States").

Path Group States:
        active   - Path group currently receiving I/O requests.
        enabled  - Path groups to try if the active path group has no paths
                   in the ready state.
        disabled - Path groups to try if the active path group and all
                   enabled path groups have no paths in the active state.

        NOTE: The disabled state only exists for certain storage arrays.

Path Priority:
        Each path can have a priority assigned to it by a callout program.
        Path priorities can be used to group paths by priority and change
        their relative weights for the round robin path selector.

Path Group Priority:
        Each path group has a priority that is equal to the sum of the
        priorities of all the non-faulty paths in the group. By default, the
        multipathd daemon tries to ensure that the path group with the
        highest priority is always in the active state.

Failover:
        When I/O to a path fails, the dm-multipath module tries to switch to 
        an enabled path group. If there are no enabled path groups with
        any paths in the ready state, dm-multipath tries to switch to a disabled 
        path group. If necessary, dm-multipath runs the hardware handler for the
        multipath device.

Failback:
        At regular intervals, multipathd checks the current priority of
        all path groups. If the current path group is not the highest priority
        path group, multipathd reacts according to the failback mode.
        By default, multipathd immediately switches to the highest priority
        path group. Other options for multipathd are to (a) wait for a
        user-defined length of time (for the path groups to stabilize)
        and then switch or (b) for multipathd to do nothing and wait for
        manual intervention.  Failback can be forced at any time by
        running the multipath command.
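        The failback mode is selected with the failback option in the
        configuration file; the defaults sections quoted earlier in this
        document use "immediate". A minimal sketch of the three behaviors
        described above (the 10-second value is a placeholder):

            defaults {
                   failback        immediate   # switch back as soon as a higher priority group is available
                   # failback      10          # or: wait 10 seconds for the path groups to stabilize, then switch
                   # failback      manual      # or: do nothing and wait for manual intervention
            }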

Multipath device:
        The multipath device is the device mapper device created by
        dm-multipath. A multipath device can be identified by either
        its WWID or its alias. A multipath device has one or more path
        groups. It also has numerous attributes defined in the
        following file:
        /usr/share/doc/device-mapper-multipathd-0.4.5/multipath.conf.annotated

alias:
        The alias is the name of a multipath device. By default, the
        alias is set to the WWID. However, by setting the
        "user_friendly_names" configuration option, the alias is set to a
        unique name of the form mpath<n>. The alias name can also be
        explicitly set for each multipath device in the configuration file.

        NOTE: While the alias is guaranteed to be unique on a node, it
              is not guaranteed to be the same on all nodes using the
              multipath device. Also, it may change.

WWID:
        The WWID (World Wide Identifier) is an identifier for the
        multipath device that is guaranteed to be globally unique and
        unchanging. It is determined by the getuid callout program.


Using DM-MP
------------------------------------

Initial setup:

1. If it is not already installed, install the device-mapper-multipath
   package.

2. Edit /etc/multipath.conf. For new installations, all devices are blacklisted.
   The default blacklist is listed in the commented out section of
   /etc/multipath.conf.  If you comment out or delete the following lines in
   /etc/multipath.conf, the default blacklist takes effect:


   devnode_blacklist {
           devnode "*"
   }

 

   For some conditions, that may not be sufficient. If DM-MP is
   multipathing devices that you do not want it to work on, you can
   blacklist the devices by either device name or WWID.

   NOTE: It is safest to blacklist individual devices by WWID, because
         their device names may change.

   Several other configuration options are detailed later in this
   document. To check the  effects of configuration changes, you can
   do a dry run with the following command:

   # multipath -v2 -d

3. Set the multipathd init script to run at boot time by issuing the commands:

   # chkconfig --add multipathd
   # chkconfig multipathd on

4. Start dm-multipath (this is only necessary the first time; on
   reboot, it should happen automatically):

   # multipath
   # /etc/init.d/multipathd start

After initial setup, all access to the multipathed storage should go through the
multipath device.

GNBD devices will not be automatically multipathed after they are imported.
The command

# multipath

must be run every time the devices are imported. Otherwise, the multipath
devices will not be created.

Configuration File:

Many features of DM-MP are configurable using the configuration file,
/etc/multipath.conf.

For a complete list of all options with descriptions, refer to
/usr/share/doc/device-mapper-multipathd-0.4.5/multipath.conf.annotated

The configuration file is divided into four sections: system defaults,
blacklisted devices (devnode_blacklist), per storage array model settings
(devices), and per multipath device settings (multipaths).  The per multipath
device settings are used for the multipath device with a matching "wwid"
value. The per storage array model settings are used for all multipath devices
with matching "vendor" and "product" values. To determine the attributes of a
multipath device, first the per multipath settings are checked, then the per
controller settings, then the system defaults.  The blacklisted device section
is described in setup step 2.

NOTE: There are compiled-in defaults for the "defaults", "devnode_blacklist",
and "devices" sections of the configuration file. To see what these
are, refer to the following file:

/usr/share/doc/device-mapper-multipathd-0.4.5/multipath.conf.defaults

If you are using one of the storage arrays listed in the preceding
text (in "Overview"), you probably do not need to modify the "devices"
subsection. If you are using a simple disk enclosure, the defaults
should work. If you are using a storage array that is not
listed, you may need to create a "devices" subsection for your array.

Reconfiguring a running system:

If any changes are made to the configuration file that affect multipathd
(check /usr/share/doc/device-mapper-multipathd-0.4.5/multipath.conf.annotated
to see if multipathd is in the option's scope), you must restart multipathd
for the changes to take effect. To do that, run

# /etc/init.d/multipathd restart

Explanation of output
-----------------------
When you create, modify, or list a multipath device, you get a printout of
the current device setup. The format is as follows.

For each multipath device:

action_if_any: alias (wwid_if_different_from_alias)
[size][features][hardware_handler]

For each path group:

\_ scheduling_policy [path_group_priority_if_known][path_group_status_if_known]


For each path:

 \_ host:channel:id:lun devnode major:minor [dm_status_if_known][path_status]

The dm status (dm_status_if_known) is like the path status
(path_status), but from the kernel's point of view.  The dm status has two
states: "failed", which is analogous to "faulty", and "active" which
covers all other path states. Occasionally, the path state and the 
dm state of a device will temporarily not agree.

NOTE: When a multipath device is being created or modified, the path group
status and the dm status are not known.  Also, the features are not always
correct. When a multipath device is being listed, the path group priority is not
known.
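
For instance, the sample multipath -v2 output shown in section 6.2 above can
be read against this format as follows (annotations added here for
illustration):

    create: 3600a0b80001327d80000006d43621677    action and alias (the WWID here, since no alias is set)
    [size=12 GB][features="0"][hwhandler="0"]    size, features, hardware handler
    \_ round-robin 0                             path group: scheduling policy (priority/status not yet known)
      \_ 2:0:0:0 sdb  8:16                       path: host:channel:id:lun, devnode, major:minor
      \_ 3:0:0:0 sdf  8:80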

Restrictions
---------------
DM-MP cannot be run on either the root or boot device.

Other Sources of information
----------------------------
Configuration file explanation:
/usr/share/doc/device-mapper-multipathd-0.4.5/multipath.conf.annotated

Upstream documentation:
http://christophe.varoqui.free.fr/wiki/wakka.php?wiki=Home

mailing list:
dm-devel@redhat.com
Subscribe to this from https://www.redhat.com/mailman/listinfo/dm-devel.
The list archives are at 
https://www.redhat.com/archives/dm-devel/

Man pages:
multipath.8, multipathd.8, kpartx.8, mpath_ctl.8

 







