1 Control node service status
[root@linux-node1 ~]# nova service-list
+----+------------------+-------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary | Host | Zone | Status | State | Updated_at | Disabled Reason |
+----+------------------+-------------+----------+---------+-------+----------------------------+-----------------+
| 3 | nova-consoleauth | linux-node1 | internal | enabled | up | 2017-01-02T07:33:06.000000 | - |
| 4 | nova-conductor | linux-node1 | internal | enabled | up | 2017-01-02T07:33:06.000000 | - |
| 5 | nova-scheduler | linux-node1 | internal | enabled | up | 2017-01-02T07:33:05.000000 | - |
+----+------------------+-------------+----------+---------+-------+----------------------------+-----------------+
2 Install the service packages
2.1 Install the repository packages
1. Install the repository:
yum install -y centos-release-openstack-newton
Change the OpenStack repo address to the Aliyun mirror (to speed up yum downloads):
sed -i "s#mirror.centos.org#mirrors.aliyun.com#g" /etc/yum.repos.d/CentOS-OpenStack-newton.repo
2. Install the OpenStack client:
yum install -y python-openstackclient
yum install -y openstack-selinux
rpm -qa python-openstackclient openstack-selinux
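The sed substitution above can be sanity-checked on a scratch copy before touching the real repo file (the /tmp path and the sample baseurl line below are just for the demo):

```shell
# Demo file standing in for /etc/yum.repos.d/CentOS-OpenStack-newton.repo
f=/tmp/CentOS-OpenStack-newton.repo
echo 'baseurl=http://mirror.centos.org/centos/$releasever/cloud/$basearch/openstack-newton/' > "$f"

# Same substitution as in the step above
sed -i "s#mirror.centos.org#mirrors.aliyun.com#g" "$f"

# The baseurl should now point at the Aliyun mirror
grep baseurl "$f"
```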
2.2 Install the service package
yum install openstack-nova-compute
2.2.1 Installation error
Error downloading packages:
  1:librados2-10.2.2-0.el7.x86_64: [Errno 256] No more mirrors to try.
Solution:
[root@linux-node2 yum.repos.d]# cat CentOS-Ceph-Jewel.repo
# CentOS-Ceph-Jewel.repo
#
# Please see http://wiki.centos.org/SpecialInterestGroup/Storage for more
# information
[centos-ceph-jewel]
name=CentOS-$releasever - Ceph Jewel
baseurl=http://mirror.centos.org/centos/$releasever/storage/$basearch/ceph-jewel/
gpgcheck=0
# change all of these to 0
enabled=0
# change all of these to 0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Storage
Alternatively, the packages can be downloaded and installed from the link below:
http://ftp.riken.jp/Linux/centos/7/storage/x86_64/ceph-jewel/
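Instead of editing the repo file by hand, both flags can be flipped with one sed pass. This is a sketch that demonstrates the substitution on a scratch copy (the real target is /etc/yum.repos.d/CentOS-Ceph-Jewel.repo):

```shell
# Scratch copy standing in for /etc/yum.repos.d/CentOS-Ceph-Jewel.repo
repo=/tmp/CentOS-Ceph-Jewel.repo
cat > "$repo" <<'EOF'
[centos-ceph-jewel]
name=CentOS-$releasever - Ceph Jewel
baseurl=http://mirror.centos.org/centos/$releasever/storage/$basearch/ceph-jewel/
gpgcheck=1
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Storage
EOF

# Set both gpgcheck and enabled to 0 in one pass
sed -i -e 's/^gpgcheck=1/gpgcheck=0/' -e 's/^enabled=1/enabled=0/' "$repo"

grep -E '^(gpgcheck|enabled)=' "$repo"
```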
2.2.2 Modify the configuration file
Many of the settings here are identical to the nova configuration file on node1, so we first copy node1's configuration file over and then adjust it. The exact steps are as follows:
[root@linux-node1 yum.repos.d]# scp /etc/nova/nova.conf 192.168.56.12:/etc/nova/
root@192.168.56.12's password:
nova.conf
Check that the permissions are correct:
[root@linux-node2 ~]# ll /etc/nova/nova.conf
-rw-r----- 1 root nova 290129 Dec 25 10:32 /etc/nova/nova.conf
Modify the configuration file:
[root@linux-node2 ~]# vi /etc/nova/nova.conf
Delete the two MySQL connection lines:
connection=mysql+pymysql://nova:nova@192.168.56.11/nova_api
connection=mysql+pymysql://nova:nova@192.168.56.11/nova
The full configuration is as follows:
[root@linux-node2 nova]# grep -n '^[a-z]' nova.conf
2:transport_url=rabbit://openstack:openstack@192.168.56.11
15:auth_strategy=keystone
2063:use_neutron=True
3053:enabled_apis=osapi_compute,metadata
3267:firewall_driver = nova.virt.firewall.NoopFirewallDriver
4813:api_servers=http://192.168.56.11:9292
5430:auth_uri = http://192.168.56.11:5000
5431:auth_url = http://192.168.56.11:35357
5432:memcached_servers = 192.168.56.11:11211
5433:auth_type = password
5434:project_domain_name = default
5435:user_domain_name = default
5436:project_name = service
5437:username = nova
5438:password = nova
6470:url = http://192.168.56.11:9696
6471:auth_url = http://192.168.56.11:35357
6472:auth_type = password
6473:project_domain_name = default
6474:user_domain_name = default
6475:region_name = RegionOne
6476:project_name = service
6477:username = neutron
6478:password = neutron
6479:service_metadata_proxy = True
6480:metadata_proxy_shared_secret = oldboy
6716:lock_path=/var/lib/nova/tmp
6895:transport_url=rabbit://openstack:openstack@192.168.56.11
8372:enabled=true
8388:keymap=en-us
8395:vncserver_listen=0.0.0.0
8407:vncserver_proxyclient_address=192.168.56.12
8426:novncproxy_base_url=http://192.168.56.11:6080/vnc_auto.html
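The two MySQL connection lines can also be dropped non-interactively with sed instead of vi. A sketch on a scratch copy (the real target is /etc/nova/nova.conf):

```shell
# Scratch copy standing in for /etc/nova/nova.conf
conf=/tmp/nova.conf
cat > "$conf" <<'EOF'
transport_url=rabbit://openstack:openstack@192.168.56.11
connection=mysql+pymysql://nova:nova@192.168.56.11/nova_api
connection=mysql+pymysql://nova:nova@192.168.56.11/nova
auth_strategy=keystone
EOF

# Remove every mysql+pymysql connection line in one pass
sed -i '/^connection=mysql+pymysql:/d' "$conf"

# Nothing should match any more
grep '^connection=' "$conf" || echo "connection lines removed"
```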
2.3 Check virtualization support
[root@linux-node2 nova]# egrep -c '(vmx|svm)' /proc/cpuinfo
2
If virtualization is not supported, the command prints 0; in that case, modify the configuration file:
Line 5672:
#virt_type=kvm
Without hardware virtualization support, uncomment this line and set virt_type=qemu so Nova falls back to software emulation.
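The CPU-flag check and the resulting setting can be combined in a small sketch (the echoed lines are only illustrative, not applied to any file):

```shell
# Choose virt_type from the CPU flags: kvm needs vmx (Intel) or svm (AMD)
if [ "$(egrep -c '(vmx|svm)' /proc/cpuinfo)" -gt 0 ]; then
    echo "virt_type=kvm"
else
    echo "virt_type=qemu"
fi
```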
2.4 Start the compute service and its dependencies, and configure them to start at boot
[root@linux-node2 nova]# systemctl enable libvirtd.service openstack-nova-compute.service
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-compute.service to /usr/lib/systemd/system/openstack-nova-compute.service.
[root@linux-node2 nova]# systemctl start libvirtd.service openstack-nova-compute.service
2.5 Check the startup status
[root@linux-node2 nova]# ps aux|grep nova
nova  20997  2.3  7.0 1661660 130900 ?  Ssl  14:18  3:13 /usr/bin/python2 /usr/bin/nova-compute
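A slightly cleaner check than ps aux|grep is pgrep, which does not match its own grep process (a sketch; the fallback message is just for the demo):

```shell
# List any running nova-compute processes with their full command line
pgrep -af nova-compute || echo "nova-compute is not running"
```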
Check the status on node1:
[root@linux-node1 ~]# . admin-openstack
[root@linux-node1 ~]# nova service-list
+----+------------------+-------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary | Host | Zone | Status | State | Updated_at | Disabled Reason |
+----+------------------+-------------+----------+---------+-------+----------------------------+-----------------+
| 3 | nova-consoleauth | linux-node1 | internal | enabled | up | 2017-01-02T13:41:11.000000 | - |
| 4 | nova-conductor | linux-node1 | internal | enabled | up | 2017-01-02T13:41:09.000000 | - |
| 5 | nova-scheduler | linux-node1 | internal | enabled | up | 2017-01-02T13:41:09.000000 | - |
| 6 | nova-compute | linux-node2 | nova | enabled | up | 2017-01-02T13:41:11.000000 | - |
+----+------------------+-------------+----------+---------+-------+----------------------------+-----------------+
[root@linux-node1 ~]# openstack compute service list
+----+------------------+-------------+----------+---------+-------+----------------------------+
| ID | Binary | Host | Zone | Status | State | Updated At |
+----+------------------+-------------+----------+---------+-------+----------------------------+
| 3 | nova-consoleauth | linux-node1 | internal | enabled | up | 2017-01-02T13:42:11.000000 |
| 4 | nova-conductor | linux-node1 | internal | enabled | up | 2017-01-02T13:42:09.000000 |
| 5 | nova-scheduler | linux-node1 | internal | enabled | up | 2017-01-02T13:42:09.000000 |
| 6 | nova-compute | linux-node2 | nova | enabled | up | 2017-01-02T13:42:11.000000 |
+----+------------------+-------------+----------+---------+-------+----------------------------+
This article is reposted from kesungang's 51CTO blog. Original link: http://blog.51cto.com/sgk2011/1888413. Please contact the original author before reprinting.