Cinder volume showing status as "in-use" but shows no attachment.

Environment

  • Red Hat OpenStack Platform 13.0

Issue

  • Unable to attach a volume to a server because its status shows in-use even though no server is attached to it. Attaching the volume to a server fails with the error below.
$ openstack server add volume a494a8de-1a7e-4a24-a3f2-a07e6c944987 6780193d-ba26-495b-9643-cf0fe98949d4
Invalid input received: Invalid volume: Volume 6780193d-ba26-495b-9643-cf0fe98949d4 status must be available or downloading (HTTP 400) (Request-ID: req-88c73bce-d44b-4536-8f5c-8c695472ea27) (HTTP 400) (Request-ID: req-c949c5a7-e7b5-4721-ae8e-0d0bd060edbc)

$ openstack volume list
+--------------------------------------+----------+-----------+------+-------------+
| ID                                   | Name     | Status    | Size | Attached to |
+--------------------------------------+----------+-----------+------+-------------+
| 6f52cf36-24b7-49e8-818e-8b1404a82563 | testvol2 | available |    1 |             |
| 6780193d-ba26-495b-9643-cf0fe98949d4 | testvol1 | in-use    |    1 |             |
+--------------------------------------+----------+-----------+------+-------------+

Resolution

  • Reset the volume state to available. This only updates the volume's status field in the Cinder database; it does not perform a detach, so confirm first that no server is actually using the volume.
$ openstack volume set --state available <Volume_UUID>
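  • Afterwards, verify that the volume reports available and can be attached again. A minimal check, re-using the volume UUID placeholder from above (<Server_UUID> is a placeholder for the target server):
$ openstack volume show <Volume_UUID> -c status
$ openstack server add volume <Server_UUID> <Volume_UUID>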

Diagnostic Steps

  • Check the output of the command openstack volume show <Volume_UUID>. The attachments field should be empty even though the status still shows in-use.
$ openstack volume show 5c4bd497-7ff2-4d85-9abc-17225f355690
+--------------------------------+--------------------------------------------------+
| Field                          | Value                                            |
+--------------------------------+--------------------------------------------------+
| attachments                    | []                                               |
| availability_zone              | nova                                             |
| bootable                       | false                                            |
| consistencygroup_id            | None                                             |
| created_at                     | 2019-11-14T07:00:57.000000                       |
| description                    | WBPDS_DB_Data_Volume_Extend 500 GB               |
| encrypted                      | False                                            |
| id                             | 5c4bd497-7ff2-4d85-9abc-17225f355690             |
| migration_status               | None                                             |
| multiattach                    | False                                            |
| name                           | WBPDS_DB_Data_Volume_Extend                      |
| os-vol-host-attr:host          | hostgroup@wbsdc-OSP-Cinder#wbsdc_OSP_Prod_vol004 |
| os-vol-mig-status-attr:migstat | None                                             |
| os-vol-mig-status-attr:name_id | None                                             |
| os-vol-tenant-attr:tenant_id   | f21ec0d2fe9b46a29b9de993cdc9bb9e                 |
| properties                     |                                                  |
| replication_status             | None                                             |
| size                           | 500                                              |
| snapshot_id                    | None                                             |
| source_volid                   | None                                             |
| status                         | in-use                                           |
| type                           | netapp                                           |
| updated_at                     | 2019-11-18T06:28:43.000000                       |
| user_id                        | a41e1908cbf441f6947153a98713d396                 |
+--------------------------------+--------------------------------------------------+
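  • If a particular server is suspected of still holding the attachment, cross-check from the Nova side that it does not list the volume. A minimal check (<Server_UUID> is a placeholder; the volumes_attached field name may vary slightly between client releases), where the output should not include the volume UUID:
$ openstack server show <Server_UUID> -c volumes_attached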
  • Check the status of the Cinder services. The cinder-volume service must be up and running.
$ openstack volume service list
+------------------+-------------------------+------+---------+-------+----------------------------+
| Binary           | Host                    | Zone | Status  | State | Updated At                 |
+------------------+-------------------------+------+---------+-------+----------------------------+
| cinder-scheduler | controller-2            | nova | enabled | up    | 2020-02-07T06:55:02.000000 |
| cinder-scheduler | controller-1            | nova | enabled | up    | 2020-02-07T06:54:57.000000 |
| cinder-scheduler | controller-0            | nova | enabled | up    | 2020-02-07T06:55:03.000000 |
| cinder-volume    | hostgroup@tripleo_iscsi | nova | enabled | up    | 2020-02-07T06:54:58.000000 |
+------------------+-------------------------+------+---------+-------+----------------------------+
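  • If the API still reports in-use with an empty attachments list, the stale state usually lives in the Cinder database. A hedged sketch of how a leftover attachment record could be inspected on a controller (table and column names assume the standard Cinder schema; on a containerized OSP 13 deployment the query must be run inside the database container):
$ mysql cinder -e "SELECT id, instance_uuid, attach_status, deleted FROM volume_attachment WHERE volume_id='<Volume_UUID>';"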