Complete Deployment of a CentOS 7.2 + OpenStack + KVM Cloud Platform (2) -- Cloud Disks and Follow-up Configuration

The previous post, "Complete Deployment of a CentOS 7.2 + OpenStack + KVM Cloud Platform (1) -- Base Environment Setup", covered the foundation; this post continues with the follow-up configuration.

1 Virtual machines
1.1 Where instance files are stored

Instances created in OpenStack are stored under /var/lib/nova/instances.
As shown below, nova list gives each instance's ID:
[root@linux-node2 ~]# nova list
+--------------------------------------+---------------+--------+------------+-------------+--------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+---------------+--------+------------+-------------+--------------------+
| 980fd600-a4e3-43c6-93a6-0f9dec3cc020 | kvm-server001 | ACTIVE | - | Running | flat=192.168.1.110 |
| e7e05369-910a-4dcf-8958-ee2b49d06135 | kvm-server002 | ACTIVE | - | Running | flat=192.168.1.111 |
| 3640ca6f-67d7-47ac-86e2-11f4a45cb705 | kvm-server003 | ACTIVE | - | Running | flat=192.168.1.112 |
| 8591baa5-88d4-401f-a982-d59dc2d14f8c | kvm-server004 | ACTIVE | - | Running | flat=192.168.1.113 |
+--------------------------------------+---------------+--------+------------+-------------+--------------------+
[root@linux-node2 ~]# cd /var/lib/nova/instances/
[root@linux-node2 instances]# ll
total 8
drwxr-xr-x. 2 nova nova 85 Aug 30 17:16 3640ca6f-67d7-47ac-86e2-11f4a45cb705    # instance ID
drwxr-xr-x. 2 nova nova 85 Aug 30 17:17 8591baa5-88d4-401f-a982-d59dc2d14f8c
drwxr-xr-x. 2 nova nova 85 Aug 30 17:15 980fd600-a4e3-43c6-93a6-0f9dec3cc020
drwxr-xr-x. 2 nova nova 69 Aug 30 17:15 _base
-rw-r--r--. 1 nova nova 39 Aug 30 17:17 compute_nodes       # compute node info
drwxr-xr-x. 2 nova nova 85 Aug 30 17:15 e7e05369-910a-4dcf-8958-ee2b49d06135
drwxr-xr-x. 2 nova nova 4096 Aug 30 17:15 locks # lock files

[root@linux-node2 instances]# cd 3640ca6f-67d7-47ac-86e2-11f4a45cb705/
[root@linux-node2 3640ca6f-67d7-47ac-86e2-11f4a45cb705]# ll
total 6380
-rw-rw----. 1 qemu qemu 20856 Aug 30 17:17 console.log                    # console output (viewable over VNC)
-rw-r--r--. 1 qemu qemu 6356992 Aug 30 17:43 disk                           # virtual disk (not the whole image -- it has a backing file)
-rw-r--r--. 1 nova nova 162 Aug 30 17:16 disk.info                               # disk details
-rw-r--r--. 1 qemu qemu 197120 Aug 30 17:16 disk.swap
-rw-r--r--. 1 nova nova 2910 Aug 30 17:16 libvirt.xml                           # libvirt XML config; regenerated each time the instance starts, so manual edits are discarded

[root@linux-node2 3640ca6f-67d7-47ac-86e2-11f4a45cb705]# file disk
disk: QEMU QCOW Image (v3), has backing file (path /var/lib/nova/instances/_base/378396c387dd437ec61d59627fb3fa9a67f857de), 10737418240 bytes      # the disk's backing file
[root@openstack-server 3640ca6f-67d7-47ac-86e2-11f4a45cb705]# qemu-img info disk
image: disk
file format: qcow2
virtual size: 10G (10737418240 bytes)
disk size: 6.1M
cluster_size: 65536
backing file: /var/lib/nova/instances/_base/378396c387dd437ec61d59627fb3fa9a67f857de
Format specific information:
compat: 1.1
lazy refcounts: false

The disk file is copy-on-write: the backing file never changes, modified blocks are written to the small overlay (the 6.1M disk file above), and unchanged blocks stay in the backing file. This keeps per-instance disk usage small.
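To illustrate the same copy-on-write mechanism by hand, here is a minimal sketch outside of Nova (the file names are hypothetical):

# create a base image, then a thin overlay that records only the changes made on top of it
[root@linux-node2 ~]# qemu-img create -f qcow2 base.qcow2 10G
[root@linux-node2 ~]# qemu-img create -f qcow2 -b base.qcow2 overlay.qcow2
[root@linux-node2 ~]# qemu-img info overlay.qcow2        # reports "backing file: base.qcow2"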

2 Installing and configuring Horizon-dashboard (web UI)
This was already configured in http://www.cnblogs.com/kevingrace/p/5707003.html; here is a brief recap:
The dashboard communicates with the other services through their APIs.

2.1 Install and configure the dashboard
1. Install
[root@linux-node1 ~]# yum install -y openstack-dashboard
2. Edit the configuration file
[root@linux-node1 ~]# vim /etc/openstack-dashboard/local_settings
OPENSTACK_HOST = "192.168.1.17"                   # change to the keystone host address
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"          # default role
ALLOWED_HOSTS = ['*']                             # allow access from any host
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': '192.168.1.17:11211',         # memcached endpoint
    }
}
#CACHES = {
#    'default': {
#        'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
#    }
#}
TIME_ZONE = "Asia/Shanghai"                       # set the time zone

Restart the httpd service
[root@linux-node1 ~]# systemctl restart httpd

Log in to the dashboard in a browser:
http://58.68.250.17/dashboard/
Use the demo account, or admin for the administrator account.

3 Instance creation workflow (very important)

Phase 1:
1. Through the dashboard or the CLI, the user sends a username and password to Keystone for verification; on success Keystone returns a token (OS_TOKEN).
2. The dashboard or CLI then calls nova-api: "create an instance".
3. nova-api verifies the token with Keystone.

Phase 2: interaction among the nova components
4. nova-api records the request in the nova database.
5-6. nova-api sends the request to nova-scheduler via the message queue.
7. nova-scheduler receives the message, reads from the database, and makes its scheduling decision.
8. nova-scheduler sends the request to nova-compute via the message queue.
9-11. nova-compute talks to nova-conductor over the message queue, and nova-conductor talks to the database on its behalf to fetch the needed information (the referenced diagram is slightly off here); nova-conductor is the component dedicated to database access.

Phase 3:
12. nova-compute calls the Glance API to fetch the image.
13. Glance verifies with Keystone, then hands the image to nova-compute.
14. nova-compute asks Neutron for the network.
15. Neutron verifies with Keystone, then provides the network to nova-compute.
16-17. The same pattern applies to the remaining services.

Phase 4:
nova-compute generates the VM through libvirt and KVM
18. nova-compute talks to the underlying hypervisor; with KVM, it drives KVM through libvirt to create
the instance. While the instance is being created, nova-api keeps polling the database for the creation status.
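For reference, a CLI launch that would trigger the whole flow above looks roughly like this (the flavor, image, and net-id values are placeholders, not from this deployment):

[root@linux-node1 ~]# source admin-openrc.sh
[root@linux-node1 ~]# nova boot --flavor m1.small --image CentOS-7-x86_64 --nic net-id=<net-uuid> kvm-server005
[root@linux-node1 ~]# nova list                          # the new instance moves from BUILD to ACTIVE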

************************************************************************************************* 

A detail worth knowing:
the first instance created on a new compute node is slow,
because Glance must first copy the image down to the node's _base directory; only then is the instance built.
[root@linux-node2 _base]# pwd
/var/lib/nova/instances/_base
[root@openstack-server _base]# ll
total 10485764
-rw-r--r--. 1 nova qemu 10737418240 Aug 30 17:57 378396c387dd437ec61d59627fb3fa9a67f857de
-rw-r--r--. 1 nova qemu 1048576000 Aug 30 17:57 swap_1000

After the first instance, subsequent instances on that node build much faster.
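To watch that first boot in progress -- the image landing in _base and the instance state changing -- something like this works (a simple sketch; run on the compute node with credentials loaded):

[root@linux-node2 ~]# watch -n 2 'ls -lh /var/lib/nova/instances/_base; nova list'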
For the instance-creation steps themselves, see:
http://www.cnblogs.com/kevingrace/p/5707003.html

************************************************************************************************

4 Cinder block storage service
4.1 Types of storage
1. Block storage
e.g. raw disks
2. File storage
e.g. NFS
3. Object storage

4.2 About Cinder
Cinder provides cloud disks (volumes).

cinder-api and cinder-scheduler are generally installed on the control node; cinder-volume is installed on the storage node.

4.3 Cinder control-node configuration
1. Install the packages
Control node:
[root@linux-node1 ~]#yum install -y openstack-cinder python-cinderclient
Compute node:
[root@linux-node2 ~]#yum install -y openstack-cinder python-cinderclient
2. Create the cinder database
This was already created in an earlier post: http://www.cnblogs.com/kevingrace/p/5707003.html

3. Edit the configuration file
[root@linux-node1 ~]# cat /etc/cinder/cinder.conf|grep -v "^#"|grep -v "^$"
[DEFAULT]
glance_host = 192.168.1.17
auth_strategy = keystone
rpc_backend = rabbit
[BRCD_FABRIC_EXAMPLE]
[CISCO_FABRIC_EXAMPLE]
[cors]
[cors.subdomain]
[database]
connection = mysql://cinder:cinder@192.168.1.17/cinder
[fc-zone-manager]
[keymgr]
[keystone_authtoken]
auth_uri = http://192.168.1.17:5000
auth_url = http://192.168.1.17:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = cinder
[matchmaker_redis]
[matchmaker_ring]
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
[oslo_messaging_amqp]
[oslo_messaging_qpid]
[oslo_messaging_rabbit]
rabbit_host = 192.168.1.17
rabbit_port = 5672
rabbit_userid = openstack
rabbit_password = openstack
[oslo_middleware]
[oslo_policy]
[oslo_reports]
[profiler]

Add the following to the nova configuration file:
[root@linux-node1 ~]# vim /etc/nova/nova.conf
os_region_name=RegionOne                      # add this inside the [cinder] section
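Equivalently, assuming the openstack-utils package is installed, the same edit can be made non-interactively:

[root@linux-node1 ~]# openstack-config --set /etc/nova/nova.conf cinder os_region_name RegionOne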

4. Sync the database
[root@linux-node1 ~]# su -s /bin/sh -c "cinder-manage db sync" cinder
..................
2016-08-30 18:27:20.204 67111 INFO migrate.versioning.api [-] done
2016-08-30 18:27:20.204 67111 INFO migrate.versioning.api [-] 59 -> 60... 
2016-08-30 18:27:20.208 67111 INFO migrate.versioning.api [-] done

5. Create the keystone user
[root@linux-node1 ~]# cd /usr/local/src/
[root@linux-node1 src]# source admin-openrc.sh
[root@linux-node1 src]# openstack user create --domain default --password-prompt cinder
User Password:                              # I set this to cinder
Repeat User Password:
+-----------+----------------------------------+
| Field | Value |
+-----------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | 955a2e684bed4617880942acd69e1073 |
| name | cinder |
+-----------+----------------------------------+
[root@openstack-server src]# openstack role add --project service --user cinder admin

6. Start the services
[root@linux-node1 ~]# systemctl restart openstack-nova-api.service
[root@linux-node1 ~]# systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
[root@linux-node1 ~]# systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service

7. Create and register the service in keystone
Both v1 and v2 must be registered.
[root@linux-node1 src]# source admin-openrc.sh 
[root@linux-node1 src]# openstack service create --name cinder --description "OpenStack Block Storage" volume
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Block Storage |
| enabled | True |
| id | 7626bd9be54a444589ae9f8f8d29dc7b |
| name | cinder |
| type | volume |
+-------------+----------------------------------+
[root@linux-node1 src]# openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Block Storage |
| enabled | True |
| id | 5680a0ce912b484db88378027b1f6863 |
| name | cinderv2 |
| type | volumev2 |
+-------------+----------------------------------+
[root@linux-node1 src]# openstack endpoint create --region RegionOne volume public http://192.168.1.17:8776/v1/%\(tenant_id\)s
+--------------+-------------------------------------------+
| Field | Value |
+--------------+-------------------------------------------+
| enabled | True |
| id | 10de5ed237d54452817e19fd65233ae6 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 7626bd9be54a444589ae9f8f8d29dc7b |
| service_name | cinder |
| service_type | volume |
| url | http://192.168.1.17:8776/v1/%(tenant_id)s |
+--------------+-------------------------------------------+
[root@linux-node1 src]# openstack endpoint create --region RegionOne volume internal http://192.168.1.17:8776/v1/%\(tenant_id\)s
+--------------+-------------------------------------------+
| Field | Value |
+--------------+-------------------------------------------+
| enabled | True |
| id | f706552cfb40471abf5d16667fc5d629 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 7626bd9be54a444589ae9f8f8d29dc7b |
| service_name | cinder |
| service_type | volume |
| url | http://192.168.1.17:8776/v1/%(tenant_id)s |
+--------------+-------------------------------------------+
[root@linux-node1 src]# openstack endpoint create --region RegionOne volume admin http://192.168.1.17:8776/v1/%\(tenant_id\)s
+--------------+-------------------------------------------+
| Field | Value |
+--------------+-------------------------------------------+
| enabled | True |
| id | c9dfa19aca3c43b5b0cf2fe7d393efce |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 7626bd9be54a444589ae9f8f8d29dc7b |
| service_name | cinder |
| service_type | volume |
| url | http://192.168.1.17:8776/v1/%(tenant_id)s |
+--------------+-------------------------------------------+
[root@linux-node1 src]# openstack endpoint create --region RegionOne volumev2 public http://192.168.1.17:8776/v2/%\(tenant_id\)s
+--------------+-------------------------------------------+
| Field | Value |
+--------------+-------------------------------------------+
| enabled | True |
| id | 9ac83d0fab134f889e972e4e7680b0e6 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 5680a0ce912b484db88378027b1f6863 |
| service_name | cinderv2 |
| service_type | volumev2 |
| url | http://192.168.1.17:8776/v2/%(tenant_id)s |
+--------------+-------------------------------------------+
[root@linux-node1 src]# openstack endpoint create --region RegionOne volumev2 internal http://192.168.1.17:8776/v2/%\(tenant_id\)s
+--------------+-------------------------------------------+
| Field | Value |
+--------------+-------------------------------------------+
| enabled | True |
| id | 9d18eac0868b4c49ae8f6198a029d7e0 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 5680a0ce912b484db88378027b1f6863 |
| service_name | cinderv2 |
| service_type | volumev2 |
| url | http://192.168.1.17:8776/v2/%(tenant_id)s |
+--------------+-------------------------------------------+
[root@linux-node1 src]# openstack endpoint create --region RegionOne volumev2 admin http://192.168.1.17:8776/v2/%\(tenant_id\)s
+--------------+-------------------------------------------+
| Field | Value |
+--------------+-------------------------------------------+
| enabled | True |
| id | 68c93bd6cd0f4f5ca6d5a048acbddc91 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 5680a0ce912b484db88378027b1f6863 |
| service_name | cinderv2 |
| service_type | volumev2 |
| url | http://192.168.1.17:8776/v2/%(tenant_id)s |
+--------------+-------------------------------------------+

Check the registered endpoints:
[root@linux-node1 src]# openstack endpoint list
+----------------------------------+-----------+--------------+--------------+---------+-----------+-------------------------------------------+
| ID | Region | Service Name | Service Type | Enabled | Interface | URL |
+----------------------------------+-----------+--------------+--------------+---------+-----------+-------------------------------------------+
| 02fed35802734518922d0ca2d672f469 | RegionOne | keystone | identity | True | internal | http://192.168.1.17:5000/v2.0 |
| 10de5ed237d54452817e19fd65233ae6 | RegionOne | cinder | volume | True | public | http://192.168.1.17:8776/v1/%(tenant_id)s |
| 1a3115941ff54b7499a800c7c43ee92a | RegionOne | nova | compute | True | internal | http://192.168.1.17:8774/v2/%(tenant_id)s |
| 31fbf72537a14ba7927fe9c7b7d06a65 | RegionOne | glance | image | True | admin | http://192.168.1.17:9292 |
| 5278f33a42754c9a8d90937932b8c0b3 | RegionOne | nova | compute | True | admin | http://192.168.1.17:8774/v2/%(tenant_id)s |
| 52b0a1a700f04773a220ff0e365dea45 | RegionOne | keystone | identity | True | public | http://192.168.1.17:5000/v2.0 |
| 68c93bd6cd0f4f5ca6d5a048acbddc91 | RegionOne | cinderv2 | volumev2 | True | admin | http://192.168.1.17:8776/v2/%(tenant_id)s |
| 88df7df6427d45619df192979219e65c | RegionOne | keystone | identity | True | admin | http://192.168.1.17:35357/v2.0 |
| 8c4fa7b9a24949c5882949d13d161d36 | RegionOne | nova | compute | True | public | http://192.168.1.17:8774/v2/%(tenant_id)s |
| 9ac83d0fab134f889e972e4e7680b0e6 | RegionOne | cinderv2 | volumev2 | True | public | http://192.168.1.17:8776/v2/%(tenant_id)s |
| 9d18eac0868b4c49ae8f6198a029d7e0 | RegionOne | cinderv2 | volumev2 | True | internal | http://192.168.1.17:8776/v2/%(tenant_id)s |
| be788b4aa2ce4251b424a3182d0eea11 | RegionOne | glance | image | True | public | http://192.168.1.17:9292 |
| c059a07fa3e141a0a0b7fc2f46ca922c | RegionOne | neutron | network | True | public | http://192.168.1.17:9696 |
| c9dfa19aca3c43b5b0cf2fe7d393efce | RegionOne | cinder | volume | True | admin | http://192.168.1.17:8776/v1/%(tenant_id)s |
| d0052712051a4f04bb59c06e2d5b2a0b | RegionOne | glance | image | True | internal | http://192.168.1.17:9292 |
| ea325a8a2e6e4165997b2e24a8948469 | RegionOne | neutron | network | True | internal | http://192.168.1.17:9696 |
| f706552cfb40471abf5d16667fc5d629 | RegionOne | cinder | volume | True | internal | http://192.168.1.17:8776/v1/%(tenant_id)s |
| ffdec11ccf024240931e8ca548876ef0 | RegionOne | neutron | network | True | admin | http://192.168.1.17:9696 |
+----------------------------------+-----------+--------------+--------------+---------+-----------+-------------------------------------------+

4.4 Cinder storage-node configuration
1. Provide cloud disks over iSCSI
Add a disk on the compute node and create a volume group (VG).

[root@linux-node2 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 100G 44G 57G 44% /
devtmpfs 10G 0 10G 0% /dev
tmpfs 10G 0 10G 0% /dev/shm
tmpfs 10G 90M 10G 1% /run
tmpfs 10G 0 10G 0% /sys/fs/cgroup
/dev/sda1 197M 127M 71M 65% /boot
tmpfs 6.3G 0 6.3G 0% /run/user/0
/dev/sda5 811G 33M 811G 1% /home

My compute node has no spare disk or free space left,
so I unmount the home partition and reuse it for cloud disks.

Before unmounting, back up the data under /home;
after unmounting, recreate the /home directory and copy the backup back into it.

[root@linux-node2 ~]# umount /home
[root@linux-node2 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 100G 44G 57G 44% /
devtmpfs 10G 0 10G 0% /dev
tmpfs 10G 0 10G 0% /dev/shm
tmpfs 10G 90M 10G 1% /run
tmpfs 10G 0 10G 0% /sys/fs/cgroup
/dev/sda1 197M 127M 71M 65% /boot
tmpfs 6.3G 0 6.3G 0% /run/user/0

[root@linux-node2 ~]# fdisk -l

Disk /dev/sda: 999.7 GB, 999653638144 bytes, 1952448512 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000b2db8

Device Boot Start End Blocks Id System
/dev/sda1 * 2048 411647 204800 83 Linux
/dev/sda2 411648 210126847 104857600 83 Linux
/dev/sda3 210126848 252069887 20971520 82 Linux swap / Solaris
/dev/sda4 252069888 1952448511 850189312 5 Extended
/dev/sda5 252071936 1952448511 850188288 83 Linux

With /home unmounted, /dev/sda5 can now be used for LVM.

[root@linux-node2 ~]# vim /etc/lvm/lvm.conf
filter = [ "a/sda5/", "r/.*/"]

Here "a" means accept and "r" means reject.

---------------------------------------------------------------------------------------------------------
The home partition above was not on LVM and the freed device is /dev/sda5, so /etc/lvm/lvm.conf can be set as above.

If home had been on LVM -- "df -h" would show a device name like /dev/mapper/centos-home --
then /etc/lvm/lvm.conf would have to be configured like this instead:
filter = [ "a|^/dev/mapper/centos-home$|", "r|.*/|" ]
--------------------------------------------------------------------------------------------------------

[root@linux-node2 ~]# pvcreate /dev/sda5
WARNING: xfs signature detected on /dev/sda5 at offset 0. Wipe it? [y/n]: y
Wiping xfs signature on /dev/sda5.
Physical volume "/dev/sda5" successfully created
[root@linux-node2 ~]# vgcreate cinder-volumes /dev/sda5
Volume group "cinder-volumes" successfully created
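Before pointing cinder-volume at it, a quick sanity check that the PV and VG are in place:

[root@linux-node2 ~]# pvs
[root@linux-node2 ~]# vgs cinder-volumes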

2. Edit the configuration file
[root@linux-node1 ~]# scp /etc/cinder/cinder.conf 192.168.1.8:/etc/cinder/cinder.conf
Then change the following:
[root@linux-node2 ~]# vim /etc/cinder/cinder.conf   
enabled_backends = lvm              # add in the [DEFAULT] section

[lvm]                      # add an [lvm] section at the bottom of the file
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = lioadm

3. Start the services
[root@linux-node2 ~]#systemctl enable openstack-cinder-volume.service target.service
[root@linux-node2 ~]#systemctl start openstack-cinder-volume.service target.service

4.5 Creating a cloud disk
1. Check from the control node
If clocks are out of sync, services may show as down, so restart time sync first:
[root@linux-node1 ~]# systemctl restart chronyd
[root@linux-node1 ~]# source admin-openrc.sh
[root@openstack-server ~]# cinder service-list

+------------------+----------------------+------+---------+-------+----------------------------+-----------------+
| Binary | Host | Zone | Status | State | Updated_at | Disabled Reason |
+------------------+----------------------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler | openstack-server | nova | enabled | up | 2016-08-31T07:50:06.000000 | - |
| cinder-volume    | openstack-server@lvm | nova | enabled | up | 2016-08-31T07:50:08.000000 | - |
+------------------+----------------------+------+---------+-------+----------------------------+-----------------+

--------------------------------------------------------
Now log out of the OpenStack dashboard and log back in;
"Volumes" appears under "Compute" in the left-hand menu.

--------------------------------------------------------

2. Create a cloud disk from the dashboard

(Note: you can snapshot an existing instance -- once the snapshot completes, that instance is shut down and must be started again manually -- and then create/boot new instances from the snapshot.)

(Note: an instance created from a snapshot has no IP by default and needs manual fixes; see the post-clone fixes for webvirtmgr in another post: http://www.cnblogs.com/kevingrace/p/5822928.html)
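The same can be done from the CLI instead of the dashboard; a sketch using this release's clients (the volume name and ID are placeholders):

[root@linux-node1 ~]# source admin-openrc.sh
[root@linux-node1 ~]# cinder create --display-name data-vol 50          # create a 50G volume
[root@linux-node1 ~]# nova volume-attach kvm-server001 <volume-id> auto # attach it to an instance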

The new volume is now visible on the compute node:
[root@linux-node2 ~]# lvdisplay
--- Logical volume ---
LV Path /dev/cinder-volumes/volume-efb1d119-e006-41a8-b695-0af9f8d35063
LV Name volume-efb1d119-e006-41a8-b695-0af9f8d35063
VG Name cinder-volumes
LV UUID aYztLC-jljz-esGh-UTco-KxtG-ipce-Oinx9j
LV Write Access read/write
LV Creation host, time openstack-server, 2016-08-31 15:55:05 +0800
LV Status available
# open 0
LV Size 50.00 GiB
Current LE 12800
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:0

The new cloud disk can now be attached to an instance!

Log in to kvm-server001 and the attached cloud disk is visible; mount it and it is ready to use.

[root@kvm-server001 ~]# fdisk -l

Disk /dev/vda: 10.7 GB, 10737418240 bytes
16 heads, 63 sectors/track, 20805 cylinders
Units = cylinders of 1008 * 512 = 516096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00046e27

..............

Disk /dev/vdc: 53.7 GB, 53687091200 bytes
16 heads, 63 sectors/track, 104025 cylinders
Units = cylinders of 1008 * 512 = 516096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Format the attached cloud disk:
[root@kvm-server001 ~]# mkfs.ext4 /dev/vdc
mke2fs 1.41.12 (17-May-2010)
............
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

Create the mount point /data:
[root@kvm-server001 ~]# mkdir /data

Then mount it:
[root@kvm-server001 ~]# mount /dev/vdc /data
[root@kvm-server001 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00 8.2G 737M 7.1G 10% /
tmpfs 2.9G 0 2.9G 0% /dev/shm
/dev/vda1 194M 28M 156M 16% /boot
/dev/vdc 50G 180M 47G 1% /data
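Note that this mount does not survive a reboot. To make it persistent, an /etc/fstab entry along these lines can be added (assuming the device name stays /dev/vdc; using the UUID reported by blkid /dev/vdc is safer, since virtio device names can shift):

/dev/vdc    /data    ext4    defaults    0 0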

----------------------------------A special note----------------------------------------------------------
Since the instance was built with a very small root partition, the attached cloud disk can be turned into an LVM PV and used to grow the root partition (which is itself on LVM).

The session went as follows:
[root@localhost ~]# fdisk -l
............
............
Disk /dev/vdc: 161.1 GB, 161061273600 bytes #this is the attached cloud disk
16 heads, 63 sectors/track, 312076 cylinders
Units = cylinders of 1008 * 512 = 516096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

[root@localhost ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
8.1G 664M 7.0G 9% /                               # the VM's root partition; this is what we will grow via LVM
tmpfs 2.9G 0 2.9G 0% /dev/shm
/dev/vda1 190M 37M 143M 21% /boot

First, create a new partition on the attached cloud disk:
[root@localhost ~]# fdisk /dev/vdc
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0x3256d3cb.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
switch off the mode (command 'c') and change display units to
sectors (command 'u').

Command (m for help): p

Disk /dev/vdc: 161.1 GB, 161061273600 bytes
16 heads, 63 sectors/track, 312076 cylinders
Units = cylinders of 1008 * 512 = 516096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x3256d3cb

Device Boot Start End Blocks Id System

Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-312076, default 1): 
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-312076, default 312076): 
Using default value 312076

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
[root@localhost ~]# fdisk /dev/vdc

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
switch off the mode (command 'c') and change display units to
sectors (command 'u').

Command (m for help): p

Disk /dev/vdc: 161.1 GB, 161061273600 bytes
16 heads, 63 sectors/track, 312076 cylinders
Units = cylinders of 1008 * 512 = 516096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x3256d3cb

Device Boot Start End Blocks Id System
/dev/vdc1 1 312076 157286272+ 83 Linux

Now extend the root partition's LVM:
[root@localhost ~]# pvcreate /dev/vdc1
Physical volume "/dev/vdc1" successfully created

[root@localhost ~]# lvdisplay 
--- Logical volume ---
LV Path /dev/VolGroup00/LogVol01
LV Name LogVol01
VG Name VolGroup00
LV UUID xtykaQ-3ulO-XtF0-BUqB-Pure-LH1n-O2zF1Z
LV Write Access read/write
LV Creation host, time localhost.localdomain, 2016-09-05 22:21:00 -0400
LV Status available
# open 1
LV Size 1.50 GiB
Current LE 48
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:0

--- Logical volume ---
LV Path /dev/VolGroup00/LogVol00    # the LV backing the VM's root partition -- this is the one to extend
LV Name LogVol00
VG Name VolGroup00
LV UUID 7BW8Wm-4VSt-5GzO-sIew-D1OI-pqLP-eXgM80
LV Write Access read/write
LV Creation host, time localhost.localdomain, 2016-09-05 22:21:00 -0400
LV Status available
# open 1
LV Size 8.28 GiB
Current LE 265
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:1

[root@localhost ~]# vgdisplay 
--- Volume group ---
VG Name VolGroup00
System ID 
Format lvm2
Metadata Areas 1
Metadata Sequence No 5
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 2
Max PV 0
Cur PV 1
Act PV 1
VG Size 9.78 GiB
PE Size 32.00 MiB
Total PE 313
Alloc PE / Size 313 / 9.78 GiB
Free PE / Size 0 / 0                                                               # VolGroup00 has no free space left; the VG itself must be extended
VG UUID tEEreQ-O2HZ-rm9d-vS8Y-VemY-D7uY-qAYdWU

[root@localhost ~]# vgextend VolGroup00 /dev/vdc1                # extend the VG
Volume group "VolGroup00" successfully extended

[root@localhost ~]# vgdisplay                                    # check again after extending the VG
--- Volume group ---
VG Name VolGroup00
System ID 
Format lvm2
Metadata Areas 2
Metadata Sequence No 6
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 2
Max PV 0
Cur PV 2
Act PV 2
VG Size 159.75 GiB
PE Size 32.00 MiB
Total PE 5112
Alloc PE / Size 313 / 9.78 GiB
Free PE / Size 4799 / 149.97 GiB # 149.97G of free space is now available
VG UUID tEEreQ-O2HZ-rm9d-vS8Y-VemY-D7uY-qAYdWU

Give all of the VG's free extents found above to the logical volume /dev/VolGroup00/LogVol00:
[root@localhost ~]# lvextend -l +4799 /dev/VolGroup00/LogVol00
Size of logical volume VolGroup00/LogVol00 changed from 8.28 GiB (265 extents) to 158.25 GiB (5064 extents).
Logical volume LogVol00 successfully resized.

After resizing the LV, grow the filesystem with resize2fs:
[root@localhost ~]# resize2fs /dev/VolGroup00/LogVol00
resize2fs 1.41.12 (17-May-2010)
Filesystem at /dev/VolGroup00/LogVol00 is mounted on /; on-line resizing required
old desc_blocks = 1, new_desc_blocks = 10
Performing an on-line resize of /dev/VolGroup00/LogVol00 to 41484288 (4k) blocks.
The filesystem on /dev/VolGroup00/LogVol00 is now 41484288 blocks long.

Check again -- the root partition has been extended!
[root@localhost ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
156G 676M 148G 1% /
tmpfs 2.9G 0 2.9G 0% /dev/shm
/dev/vda1 190M 37M 143M 21% /boot
--------------------------------------------------------------------------------------------

****************************************************************************************

Cloud disks are hot-attached.

Note:
format the cloud disk that appears inside the VM and mount it, e.g. under /data.

To delete a cloud disk, detach it first (unmount inside the VM, and also detach it in the dashboard).
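From the CLI, the detach-then-delete sequence would look roughly like this (the volume ID is a placeholder; unmount inside the guest first):

[root@kvm-server001 ~]# umount /data                                # inside the guest
[root@linux-node1 ~]# nova volume-detach kvm-server001 <volume-id>  # detach from the instance
[root@linux-node1 ~]# cinder delete <volume-id>                     # then delete the volume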

-----------------------------------------------------------------------------------------------------------------------------------

You can also build an LVM logical volume on the attached cloud disk inside the VM, so that when space runs out later, another disk can be added and the LVM extended -- seamless growth!

Below, instance kvm-server001 has a 100G cloud disk attached.
Partition this 100G disk and set up LVM:
[root@kvm-server001 ~]# fdisk -l

Disk /dev/vda: 10.7 GB, 10737418240 bytes
16 heads, 63 sectors/track, 20805 cylinders
Units = cylinders of 1008 * 512 = 516096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00046e27
...........................

Disk /dev/vdc: 107.4 GB, 107374182400 bytes
16 heads, 63 sectors/track, 208050 cylinders
Units = cylinders of 1008 * 512 = 516096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

First create a partition:
[root@kvm-server001 ~]# fdisk /dev/vdc 
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0x4e0d7808.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.

Command (m for help): p

Disk /dev/vdc: 107.4 GB, 107374182400 bytes
16 heads, 63 sectors/track, 208050 cylinders
Units = cylinders of 1008 * 512 = 516096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x4e0d7808

Device Boot Start End Blocks Id System

Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-208050, default 1):                                # press Enter
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-208050, default 208050):        # press Enter, i.e. use all remaining space for the new partition
Using default value 208050

Command (m for help): p

Disk /dev/vdc: 107.4 GB, 107374182400 bytes
16 heads, 63 sectors/track, 208050 cylinders
Units = cylinders of 1008 * 512 = 516096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x4e0d7808

Device Boot Start End Blocks Id System
/dev/vdc1 1 208050 104857168+ 83 Linux

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

[root@kvm-server001 ~]# pvcreate /dev/vdc1                         # create the PV
Physical volume "/dev/vdc1" successfully created
[root@kvm-server001 ~]# vgcreate vg0 /dev/vdc1                   # create the VG
Volume group "vg0" successfully created
[root@kvm-server001 ~]# vgdisplay                                       # check the VG size
--- Volume group ---
VG Name vg0
System ID 
Format lvm2
Metadata Areas 1
Metadata Sequence No 1
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 0
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size 100.00 GiB
PE Size 4.00 MiB
Total PE 25599
Alloc PE / Size 0 / 0 
Free PE / Size 25599 / 100.00 GiB
VG UUID UIsTAe-oUzt-3atO-PVTw-0JUL-7Z8s-XVppIH

[root@kvm-server001 ~]# lvcreate -L +99.99G -n lv0 vg0                  # the LV size cannot exceed the VG size
Rounding up size to full physical extent 99.99 GiB
Logical volume "lv0" created
[root@kvm-server001 ~]# mkfs.ext4 /dev/vg0/lv0                            # format the LV
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
6553600 inodes, 26212352 blocks
1310617 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
800 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks: 
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
4096000, 7962624, 11239424, 20480000, 23887872

Writing inode tables: done 
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 20 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.

[root@kvm-server001 ~]# mkdir /data                                                              # create the mount point
[root@kvm-server001 ~]# mount /dev/vg0/lv0 /data                                          # mount the LV
[root@kvm-server001 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00 8.2G 842M 7.0G 11% /
tmpfs 2.9G 0 2.9G 0% /dev/shm
/dev/vda1 194M 28M 156M 16% /boot
/dev/mapper/vg0-lv0 99G 188M 94G 1% /data
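When vg0 fills up later, growing it follows the same pattern as the root-partition example above; a sketch, assuming a newly attached cloud disk shows up in the guest as /dev/vdd:

[root@kvm-server001 ~]# pvcreate /dev/vdd                       # the new disk becomes a PV
[root@kvm-server001 ~]# vgextend vg0 /dev/vdd                   # grow the VG
[root@kvm-server001 ~]# lvextend -l +100%FREE /dev/vg0/lv0      # give all free space to the LV
[root@kvm-server001 ~]# resize2fs /dev/vg0/lv0                  # grow ext4 online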

****************************************************************************************

Background:
the compute node's internal network has no gateway, so the VMs cannot reach the internet by themselves over the bridge.
To give the installed VMs internet access, some extra manual configuration is needed:
(1) deploy squid on the compute node, so the VMs' outbound requests are proxied out through it;
(2) forward inbound requests to the VMs with iptables NAT on the compute node; web requests can be proxied with nginx or haproxy.

---------------------------------------------------------------------------------------------------------
What follows is an HTTP squid proxy;
for an HTTPS squid proxy, see another of my posts:
http://www.cnblogs.com/kevingrace/p/5853199.html
---------------------------------------------------------------------------------------------------------

(1)

1) On the compute node:
install squid directly with yum
[root@linux-node2 ~]# yum install squid
After installation, edit squid.conf; back the file up first:
[root@linux-node2 ~]# cd /etc/squid/
[root@linux-node2 squid]# cp squid.conf squid.conf_bak
[root@linux-node2 squid]# vim squid.conf
http_access allow all
http_port 192.168.1.17:3128
cache_dir ufs /var/spool/squid 100 16 256

Then run the following command as a pre-start check:
[root@linux-node2 squid]# squid -k parse
2016/08/31 16:53:36| Startup: Initializing Authentication Schemes ...
..............
2016/08/31 16:53:36| Initializing https proxy context

Before the first start, or after changing the cache path, the cache directory must be (re)initialized:
[root@kvm-linux-node2 squid]# squid -z
2016/08/31 16:59:21 kid1| /var/spool/squid exists
2016/08/31 16:59:21 kid1| Making directories in /var/spool/squid/00
................

--------------------------------------------------------------------------------
If you hit the following error:
2016/09/06 15:19:23 kid1| No cache_dir stores are configured.

Fix:
# vim squid.conf
cache_dir ufs /var/spool/squid 100 16 256 # uncomment this line

#ll /var/spool/squid                      # make sure this directory exists

Then running squid -z again initializes fine.
--------------------------------------------------------------------------------

[root@kvm-linux-node2 squid]# systemctl enable squid
Created symlink from /etc/systemd/system/multi-user.target.wants/squid.service to /usr/lib/systemd/system/squid.service.
[root@kvm-server001 squid]# systemctl start squid
[root@kvm-server001 squid]# lsof -i:3128
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
squid 62262 squid 16u IPv4 4275294 0t0 TCP openstack-server:squid (LISTEN)

If the compute node has iptables rules enabled
(on my CentOS 7.2 system I use iptables and disabled the default firewalld),
add the following line to /etc/sysconfig/iptables:
-A INPUT -s 192.168.1.0/24 -p tcp -m state --state NEW -m tcp --dport 3128 -j ACCEPT

My firewall configuration:
[root@linux-node2 squid]# cat /etc/sysconfig/iptables
# sample configuration for iptables service
# you can edit this manually or use system-config-firewall
# please do not ask us to add additional ports/services to this default configuration
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
-A INPUT -s 192.168.1.0/24 -p tcp -m state --state NEW -m tcp --dport 3128 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 6080 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 10050 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
COMMIT

Then restart the iptables service:
[root@linux-node2 ~]# systemctl restart iptables.service          # restart the firewall so the config takes effect
[root@linux-node2 ~]# systemctl enable iptables.service          # start the firewall at boot

-----------------------------------------------
2) The squid configuration on the VM side:

Just add one line at the bottom of the system-wide environment file /etc/profile:
[root@kvm-server001 ~]# vim /etc/profile 
.......
export http_proxy=http://192.168.1.17:3128

[root@kvm-server001 ~]# source /etc/profile                          # apply the change
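Depending on what the guests need, it may also help to proxy HTTPS and exempt local addresses -- an addition of mine, not part of the original setup:

export https_proxy=http://192.168.1.17:3128
export no_proxy=localhost,127.0.0.1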

Test whether the VM can reach the outside:

[root@kvm-server001 ~]# curl http://www.baidu.com                                    # external access works

[root@kvm-server001 ~]# yum list                                                               # yum works online

[root@kvm-server001 ~]# wget http://my.oschina.net/mingpeng/blog/293744 # online downloads work

With that, the VMs' outbound requests are proxied out through squid!

This covers HTTP proxying; for HTTPS squid proxying, see another of my posts: http://www.cnblogs.com/kevingrace/p/5853199.html

***********************************************

(2)

1) Proxying inbound requests to the VMs:

For NAT port forwarding, see also: http://www.cnblogs.com/kevingrace/p/5753193.html

Configure these iptables rules on the compute node (the VMs' host):

[root@linux-node2 ~]# cat iptables
# sample configuration for iptables service
# you can edit this manually or use system-config-firewall
# please do not ask us to add additional ports/services to this default configuration
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -s 192.168.1.0/24 -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
-A INPUT -s 192.168.1.0/24 -p tcp -m state --state NEW -m tcp --dport 3128 -j ACCEPT             # open the squid proxy port
-A INPUT -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT                                           # open the dashboard port
-A INPUT -p tcp -m state --state NEW -m tcp --dport 6080 -j ACCEPT                                       # open the console VNC port
-A INPUT -p tcp -m state --state NEW -m tcp --dport 15672 -j ACCEPT                                     # open the RabbitMQ port
-A INPUT -p tcp -m state --state NEW -m tcp --dport 10050 -j ACCEPT
#-A INPUT -j REJECT --reject-with icmp-host-prohibited                                                          # note: keep these two lines commented out! With them active, the VMs cannot ping each other!
#-A FORWARD -j REJECT --reject-with icmp-host-prohibited
COMMIT

--------------------------------------------------------------------------------------------------------------------------------
Explanation:
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
These two rules reject, in the INPUT and FORWARD chains, every packet that matches none of the rules above them, and send a host-prohibited message back to the rejected host.
They are part of the default iptables policy; delete them and configure rules that match your own needs.

With these two rules active, pings between the host and the VMs are unaffected,
but the VMs cannot ping each other, because VM-to-VM pings pass through the host and these rules block them. Just delete them.
--------------------------------------------------------------------------------------------------------------------------------
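The NAT port forwarding itself is only referenced above, not shown; a minimal sketch of the idea, assuming the host's external interface is eth0 and SSH to kvm-server001 (192.168.1.110) should be reachable on host port 2222 (interface and port are hypothetical):

[root@linux-node2 ~]# echo 1 > /proc/sys/net/ipv4/ip_forward
[root@linux-node2 ~]# iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 2222 -j DNAT --to-destination 192.168.1.110:22
[root@linux-node2 ~]# iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -o eth0 -j MASQUERADE

To persist these, add the equivalent *nat rules to /etc/sysconfig/iptables.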

Restart iptables (below) and then the VMs.
With the firewall enabled, the host and the VMs, and the VMs among themselves, can all ping each other:
[root@linux-node2 ~]# systemctl restart iptables.service

************************************************************************************************************************
In an OpenStack private cloud, the VMs created on one compute node effectively form a LAN.
With the host firewall enabled and configured as above, VM-to-host, VM-to-VM on the same node, and VM-to-other-machines on the host's subnet can all reach one another, i.e. all can ping each other.
************************************************************************************************************************

2) Proxying the VMs' web applications

Two options (nginx or haproxy on the host):

a. nginx reverse proxy: resolve each domain to the host IP, configure a vhost in nginx, and forward to the VM with proxy_pass.

b. haproxy: likewise resolve each domain to the host IP, then set up per-domain forwarding rules.

Either way, requests for each domain arriving on the host's port 80 are forwarded to the corresponding VM.

For nginx reverse proxying, these two posts may help:

http://www.cnblogs.com/kevingrace/p/5839698.html

http://www.cnblogs.com/kevingrace/p/5865501.html

*****************************************************************

The nginx reverse-proxy idea:
nginx listens on port 80 on the host and forwards by domain; each domain's vhost on the backend VM must therefore listen on its own distinct port.

For example, the host-side proxy configuration for two domains (other domains follow the same pattern):

[root@linux-node1 vhosts]# cat www.world.com.conf 
upstream 8080 {
    server 192.168.1.150:8080;
}

server {
    listen 80;
    server_name www.world.com;
    location / {
        proxy_store off;
        proxy_redirect off;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $http_host;
        proxy_pass http://8080;
    }
}

[root@linux-node1 vhosts]# cat www.tech.com.conf 
upstream 8081 {
    server 192.168.1.150:8081;
}

server {
    listen 80;
    server_name www.tech.com;
    location / {
        proxy_store off;
        proxy_redirect off;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $http_host;
        proxy_pass http://8081;
    }
}

That is, www.world.com and www.tech.com both resolve to the host's public IP, and then:
requests for http://www.world.com are proxied by the host to port 8080 on the backend VM 192.168.1.150, i.e. that domain's vhost on the VM listens on 8080;
requests for http://www.tech.com are proxied by the host to port 8081 on the backend VM 192.168.1.150, i.e. that domain's vhost on the VM listens on 8081.

If the backend VM hosts additional domains, they are configured the same way.

One more thing:
it is best to add host mappings on the real servers behind the proxy (map each domain to 127.0.0.1 in /etc/hosts); otherwise domain access through the proxy may misbehave.
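Concretely, on the backend VM that mapping is a single line:

[root@kvm-server001 ~]# echo "127.0.0.1 www.world.com www.tech.com" >> /etc/hosts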

---------------------------------------------------------------------------------------------
The host needs port 80 for the web-application proxying,
but 80 is already taken by the dashboard, so move the dashboard to another port, e.g. 8080.
The changes required:
1) vim /etc/httpd/conf/httpd.conf
change port 80 to 8080:

Listen 8080
ServerName 192.168.1.8:8080
2) vim /etc/openstack-dashboard/local_settings #change these two ports from 80 to 8080
'from_port': '8080',
'to_port': '8080',
3) add a firewall rule for port 8080
-A INPUT -p tcp -m state --state NEW -m tcp --dport 8080 -j ACCEPT

Then restart the httpd service:
#systemctl restart httpd

The dashboard URL becomes:
http://58.68.250.17:8080/dashboard
---------------------------------------------------------------------------------------------



This article is reposted from the cnblogs blog of 散尽浮华; original: http://www.cnblogs.com/kevingrace/p/5822928.html. For reproduction, please contact the original author.