Building OpenStack Train on CentOS 7 -- Part 1
Replacing the YUM repositories
Since CentOS 7 is no longer maintained, the YUM repositories also need to be replaced. Here are the extra repositories required for installing OpenStack; configure a base repository yourself and you can continue with the deployment.
[root@localhost ~]# vi /etc/yum.repos.d/CentOS_ALL.repo
[centos-ceph-nautilus]
name=CentOS-$releasever - Ceph Nautilus
baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos-vault/7.9.2009/storage/x86_64/ceph-nautilus/
gpgcheck=1
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Storage

[centos-nfs-ganesha28]
name=CentOS-$releasever - NFS Ganesha 2.8
baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos-vault/7.9.2009/storage/x86_64/nfsganesha-28/
gpgcheck=1
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Storage

[centos-qemu-ev]
name=CentOS-$releasever - QEMU EV
baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos-vault/7.9.2009/virt/x86_64/kvm-common/
gpgcheck=1
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Virtualization

[centos-storage-debuginfo]
name=CentOS-$releasever - Storage SIG - debuginfo
baseurl=http://debuginfo.centos.org/$contentdir/$releasever/storage/$basearch/
gpgcheck=1
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Storage
1. Environment preparation (run on all nodes)
Hostname | IP |
controller | 192.168.100.100 (host-only), 192.168.200.X (NAT mode, DHCP-assigned) |
compute | 192.168.100.110 (host-only), 192.168.200.X (NAT mode, DHCP-assigned) |
If you need a storage node, just add another machine and configure its network the same way. All of my passwords are set to 123.
1.1 Set the hostname
All nodes need this step and the commands are the same; only the controller's example is shown here, run it on the other nodes as well.
Configure the IP addresses yourself.
[root@localhost ~]# hostnamectl set-hostname controller
[root@localhost ~]# bash
1.2 Disable SELinux and the firewall
Change the default enforcing to disabled.
[root@controller ~]# cat /etc/selinux/config

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled

[root@controller ~]# setenforce 0
[root@controller ~]# systemctl disable --now firewalld
1.3 Edit the hosts file
[root@controller ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
# Add controller and compute; if there are other nodes, add them here too
192.168.100.100 controller
192.168.100.110 compute
1.4 Configure time synchronization
On the controller:
[root@controller ~]# yum install chrony -y
[root@controller ~]# vim /etc/chrony.conf
# The controller node needs these three changes
server ntp.aliyun.com iburst      # You can use a different NTP server here, as long as it can sync
allow 192.168.100.0/24            # Allow hosts in the 192.168.100.0/24 subnet to sync with this server
local stratum 10
[root@controller ~]# systemctl restart chronyd
[root@controller ~]# chronyc sources
210 Number of sources = 1
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^* 203.107.6.88                  2   6    17     2   +319us[ +654us] +/-   23ms
On compute and any other nodes:
[root@compute ~]# yum install chrony -y
[root@compute ~]# vim /etc/chrony.conf
# Only one change is needed
server controller iburst
[root@compute ~]# systemctl restart chronyd
[root@compute ~]# chronyc sources
210 Number of sources = 1
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^* controller                    3   6     7     1 +6342ns[ -921us] +/-   29ms
1.5 Configure the OpenStack packages
[root@controller ~]# yum install centos-release-openstack-train -y
[root@controller ~]# yum install python2-openstackclient -y
After installing this, be sure to check the yum repo directory and delete the extra CentOS-xxx files that appeared, keeping only CentOS_ALL.repo, because the repositories in those files have expired and no longer work.
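For example, a minimal cleanup sketch, assuming the extra files all match the CentOS-* pattern (move them aside rather than deleting them, in case you need them later):
[root@controller ~]# ls /etc/yum.repos.d/
[root@controller ~]# mkdir -p /etc/yum.repos.d/backup
[root@controller ~]# mv /etc/yum.repos.d/CentOS-*.repo /etc/yum.repos.d/backup/    # CentOS_ALL.repo does not match this pattern, so it stays
[root@controller ~]# yum clean all && yum makecache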
1.6 Install the database
From here on, the steps only need to be run on the controller node; the other nodes are not involved.
[root@controller ~]# yum install mariadb mariadb-server python2-PyMySQL
[root@controller ~]# vim /etc/my.cnf.d/openstack.cnf
[mysqld]
# The official docs include this line; I left it commented out because it makes troubleshooting less convenient
# bind-address = 192.168.100.100
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
[root@controller ~]# systemctl enable --now mariadb
Database initialization
[root@controller ~]# mysql_secure_installation
Enter current password for root (enter for none):   # Just press Enter
Change the root password? [Y/n]                     # Press Y and enter a new root password
Remove anonymous users? [Y/n]                       # Removing anonymous users is recommended, press Y
Disallow root login remotely? [Y/n]                 # I keep remote root login enabled, so press n
Remove test database and access to it? [Y/n]        # Removing the test database is recommended, press Y
Reload privilege tables now? [Y/n]                  # Reload the privilege tables, press Y
1.7 Install the message queue
[root@controller ~]# yum install rabbitmq-server -y
[root@controller ~]# systemctl enable rabbitmq-server.service --now
# Replace RABBIT_PASS below with your own password; I use 123
# rabbitmqctl add_user openstack RABBIT_PASS
[root@controller ~]# rabbitmqctl add_user openstack 123
[root@controller ~]# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
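A quick check that the user and its permissions were created (rabbitmqctl defaults to the / vhost):
[root@controller ~]# rabbitmqctl list_users
[root@controller ~]# rabbitmqctl list_permissions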
1.8 Install memcached
[root@controller ~]# yum install memcached python-memcached -y
[root@controller ~]# vim /etc/sysconfig/memcached
# Modify this line, adding controller
OPTIONS="-l 127.0.0.1,::1,controller"
[root@controller ~]# systemctl enable memcached.service --now
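To confirm memcached is now listening on the controller address as well as localhost (11211 is the default port):
[root@controller ~]# ss -tlnp | grep 11211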
1.9 Install etcd
[root@controller ~]# yum install etcd -y
[root@controller ~]# vim /etc/etcd/etcd.conf
# Clear the original configuration and use the following; remember to change 192.168.100.100 to your own controller IP
#[Member]
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://192.168.100.100:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.100.100:2379"
ETCD_NAME="controller"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.100.100:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.100.100:2379"
ETCD_INITIAL_CLUSTER="controller=http://192.168.100.100:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"
ETCD_INITIAL_CLUSTER_STATE="new"
[root@controller ~]# systemctl enable --now etcd
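A quick sanity check that etcd is answering (adjust the endpoint to your controller IP; the exact output depends on which etcdctl API version your client defaults to):
[root@controller ~]# etcdctl --endpoints=http://192.168.100.100:2379 member list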
2. Install Keystone
2.1 Database setup
All of my passwords are 123 and I won't repeat this note from here on; if you want a different password, just replace 123 with the one you want.
[root@controller ~]# mysql -uroot -p123    # -p is followed by your database root password; if you don't want it on the command line, use -p alone and enter it at the prompt
MariaDB [(none)]> CREATE DATABASE keystone;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY '123';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY '123';
MariaDB [(none)]> FLUSH PRIVILEGES;
2.2 Install the packages
[root@controller ~]# yum install openstack-keystone httpd mod_wsgi -y
[root@controller ~]# vim /etc/keystone/keystone.conf
[database]
# Remember to substitute your own password; 123 is the part you need to replace
connection = mysql+pymysql://keystone:123@controller/keystone

[token]
provider = fernet

[root@controller ~]# su -s /bin/sh -c "keystone-manage db_sync" keystone
[root@controller ~]# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
[root@controller ~]# keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
# The next command is fairly long; remember to replace 123
[root@controller ~]# keystone-manage bootstrap --bootstrap-password 123 \
  --bootstrap-admin-url http://controller:5000/v3/ \
  --bootstrap-internal-url http://controller:5000/v3/ \
  --bootstrap-public-url http://controller:5000/v3/ \
  --bootstrap-region-id RegionOne
[root@controller ~]# vim /etc/httpd/conf/httpd.conf
ServerName controller
[root@controller ~]# ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
[root@controller ~]# systemctl enable --now httpd
2.3 Write the admin rc file
[root@controller ~]# vim admin-login.sh
export OS_USERNAME=admin
export OS_PASSWORD=123
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
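To use it, source the file and make sure a token can be issued; the id and expiry in the output will differ on your system:
[root@controller ~]# source admin-login.sh
[root@controller ~]# openstack token issue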
2.4 Test that Keystone works
2.4.1 Create a domain
[root@controller ~]# openstack domain create --description "An Example Domain" example
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | An Example Domain                |
| enabled     | True                             |
| id          | 2f4f80574fd84fe6ba9067228ae0a50c |
| name        | example                          |
| tags        | []                               |
+-------------+----------------------------------+
2.4.2 Create a project
[root@controller ~]# openstack project create --domain default \
  --description "Service Project" service
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Service Project                  |
| domain_id   | default                          |
| enabled     | True                             |
| id          | 24ac7f19cd944f4cba1d77469b2a73ed |
| is_domain   | False                            |
| name        | service                          |
| parent_id   | default                          |
| tags        | []                               |
+-------------+----------------------------------+
2.4.3 Final test
[root@controller ~]# unset OS_AUTH_URL OS_PASSWORD
[root@controller ~]# openstack --os-auth-url http://controller:5000/v3 \
  --os-project-domain-name Default --os-user-domain-name Default \
  --os-project-name admin --os-username admin token issue
Password:     # Enter the admin password here
+------------+-----------------------------------------------------------------+
| Field      | Value                                                           |
+------------+-----------------------------------------------------------------+
| expires    | 2016-02-12T20:14:07.056119Z                                     |
| id         | gAAAAABWvi7_B8kKQD9wdXac8MoZiQldmjEO643d-e_j-XXq9AmIegIbA7UHGPv |
|            | atnN21qtOMjCFWX7BReJEQnVOAj3nclRQgAYRsfSU_MrsuWb4EDtnjU7HEpoBb4 |
|            | o6ozsA_NmFWEpLeKy0uNn_WeKbAhYygrsmQGA49dclHVnz-OMVLiyM9ws       |
| project_id | 343d245e850143a096806dfaefa9afdc                                |
| user_id    | ac3377633149401296f6c0d92d79dc16                                |
+------------+-----------------------------------------------------------------+
3. Install Glance
3.1 Database setup
[root@controller ~]# mysql -u root -p123
MariaDB [(none)]> CREATE DATABASE glance;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
  IDENTIFIED BY '123';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
  IDENTIFIED BY '123';
3.2 Create the glance user
[root@controller ~]# openstack user create --domain default --password-prompt glance
User Password:            # Enter the password twice
Repeat User Password:
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | 3f4e777c4062483ab8d9edd7dff829df |
| name                | glance                           |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+
[root@controller ~]# openstack role add --project service --user glance admin
[root@controller ~]# openstack service create --name glance --description "OpenStack Image" image
3.3 Create the service endpoints for glance
[root@controller ~]# openstack endpoint create --region RegionOne \
  image public http://controller:9292
[root@controller ~]# openstack endpoint create --region RegionOne \
  image internal http://controller:9292
[root@controller ~]# openstack endpoint create --region RegionOne \
  image admin http://controller:9292
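You can double-check that all three endpoints were registered with a filtered listing (output omitted):
[root@controller ~]# openstack endpoint list --service image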
3.4 Install and configure glance
[root@controller ~]# yum install openstack-glance -y
3.4.1 The glance-api configuration file
[root@controller ~]# vim /etc/glance/glance-api.conf
[database]
connection = mysql+pymysql://glance:123@controller/glance

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = 123

[paste_deploy]
flavor = keystone

[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
3.5 Sync the database
[root@controller ~]# su -s /bin/sh -c "glance-manage db_sync" glance
3.6 Start the service
[root@controller ~]# systemctl enable openstack-glance-api.service --now
3.7 Verify the service
[root@controller ~]# source admin-login.sh
[root@controller ~]# wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
[root@controller ~]# glance image-create --name "cirros" \
  --file cirros-0.4.0-x86_64-disk.img \
  --disk-format qcow2 --container-format bare \
  --visibility public
[root@controller ~]# openstack image list
+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| 2a2ff041-0696-47a6-893b-b35d529b743d | cirros | active |
+--------------------------------------+--------+--------+
# This output means everything is working
4. Install Placement
4.1 Database setup
[root@controller ~]# mysql -u root -p123
MariaDB [(none)]> CREATE DATABASE placement;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' IDENTIFIED BY '123';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY '123';
4.2 Configure the user
[root@controller ~]# openstack user create --domain default --password-prompt placement
[root@controller ~]# openstack role add --project service --user placement admin
[root@controller ~]# openstack service create --name placement \
  --description "Placement API" placement
4.3 Create the service endpoints
[root@controller ~]# openstack endpoint create --region RegionOne \
  placement public http://controller:8778
[root@controller ~]# openstack endpoint create --region RegionOne \
  placement admin http://controller:8778
[root@controller ~]# openstack endpoint create --region RegionOne \
  placement internal http://controller:8778
4.4 Install placement
[root@controller ~]# yum install openstack-placement-api -y
4.5 The placement configuration file
[root@controller ~]# vim /etc/placement/placement.conf
[placement_database]
connection = mysql+pymysql://placement:123@controller/placement

[api]
auth_strategy = keystone

[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = placement
password = 123
4.6 Sync the database
[root@controller ~]# su -s /bin/sh -c "placement-manage db sync" placement
4.7 Restart the httpd service
[root@controller ~]# systemctl restart httpd
4.8 Verify the service
[root@controller ~]# placement-status upgrade check
+----------------------------------+
| Upgrade Check Results            |
+----------------------------------+
| Check: Missing Root Provider IDs |
| Result: Success                  |
| Details: None                    |
+----------------------------------+
| Check: Incomplete Consumers      |
| Result: Success                  |
| Details: None                    |
+----------------------------------+
4.9 Pitfall to watch out for
The official guide includes a configuration block that is not in this file by default. Without it, one of the later nova commands will fail, and you will not be able to create virtual machines either: you will get a 500 error about no suitable host being found. So this has to be enabled.
[root@controller ~]# vim /etc/httpd/conf.d/00-placement-api.conf
# Put this configuration in this file
<Directory /usr/bin>
   <IfVersion >= 2.4>
      Require all granted
   </IfVersion>
   <IfVersion < 2.4>
      Order allow,deny
      Allow from all
   </IfVersion>
</Directory>
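After adding this block, restart httpd once more and check that the Placement API answers; it should return a short JSON document listing the API versions rather than a 403 or 500 error:
[root@controller ~]# systemctl restart httpd
[root@controller ~]# curl http://controller:8778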
5. Install Nova
5.1 Database setup
[root@controller ~]# mysql -u root -p123
MariaDB [(none)]> CREATE DATABASE nova_api;
MariaDB [(none)]> CREATE DATABASE nova;
MariaDB [(none)]> CREATE DATABASE nova_cell0;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
  IDENTIFIED BY '123';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
  IDENTIFIED BY '123';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
  IDENTIFIED BY '123';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
  IDENTIFIED BY '123';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
  IDENTIFIED BY '123';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
  IDENTIFIED BY '123';
5.2 Create the user
[root@controller ~]# openstack user create --domain default --password-prompt nova
User Password:
Repeat User Password:
[root@controller ~]# openstack role add --project service --user nova admin
[root@controller ~]# openstack service create --name nova \
  --description "OpenStack Compute" compute
5.3 Create the service endpoints
[root@controller ~]# openstack endpoint create --region RegionOne \
  compute public http://controller:8774/v2.1
[root@controller ~]# openstack endpoint create --region RegionOne \
  compute internal http://controller:8774/v2.1
[root@controller ~]# openstack endpoint create --region RegionOne \
  compute admin http://controller:8774/v2.1
5.4 Install the packages
[root@controller ~]# yum install openstack-nova-api openstack-nova-conductor openstack-nova-novncproxy openstack-nova-scheduler
5.5 Edit the configuration file nova.conf
[root@controller ~]# vim /etc/nova/nova.conf
[DEFAULT]
my_ip = 192.168.100.100
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
transport_url = rabbit://openstack:123@controller:5672/
enabled_apis = osapi_compute,metadata

[api_database]
connection = mysql+pymysql://nova:123@controller/nova_api

[database]
connection = mysql+pymysql://nova:123@controller/nova

[api]
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = 123

[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip

[glance]
api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = 123
5.6 Sync the databases
[root@controller ~]# su -s /bin/sh -c "nova-manage api_db sync" nova
[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
[root@controller ~]# su -s /bin/sh -c "nova-manage db sync" nova
[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova
+-------+--------------------------------------+----------------------------------------------------+--------------------------------------------------------------+----------+
| Name  | UUID                                 | Transport URL                                      | Database Connection                                          | Disabled |
+-------+--------------------------------------+----------------------------------------------------+--------------------------------------------------------------+----------+
| cell0 | 00000000-0000-0000-0000-000000000000 | none:/                                             | mysql+pymysql://nova:****@controller/nova_cell0?charset=utf8 | False    |
| cell1 | f690f4fd-2bc5-4f15-8145-db561a7b9d3d | rabbit://openstack:****@controller:5672/nova_cell1 | mysql+pymysql://nova:****@controller/nova_cell1?charset=utf8 | False    |
+-------+--------------------------------------+----------------------------------------------------+--------------------------------------------------------------+----------+
5.7 Start the services
[root@controller ~]# systemctl enable \
> openstack-nova-api.service \
> openstack-nova-scheduler.service \
> openstack-nova-conductor.service \
> openstack-nova-novncproxy.service --now
6. Install nova-compute (run on the compute node)
6.1 Install the packages
[root@compute ~]# yum install openstack-nova-compute -y
6.2 Edit the configuration file
[root@compute ~]# vim /etc/nova/nova.conf
[DEFAULT]
my_ip = 192.168.100.110
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:123@controller

[api]
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = 123

[vnc]
enabled = true
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html

[glance]
api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = 123

[libvirt]
virt_type = qemu
6.3 Check whether the compute node supports hardware virtualization
[root@compute ~]# egrep -c '(vmx|svm)' /proc/cpuinfo
The output should be 1 or a number greater than 1. If it is 0, check whether virtualization is enabled.
In VMware Workstation, open Virtual Machine Settings --> CPU --> and tick the virtualization checkbox.
6.4 Start the services
[root@compute ~]# systemctl enable libvirtd.service openstack-nova-compute.service --now
6.5 Add the compute node to the database (run on the controller node)
[root@controller ~]# openstack compute service list --service nova-compute
[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
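Optionally, instead of re-running discover_hosts each time a compute node is added, you can let nova-scheduler discover hosts periodically by adding this to /etc/nova/nova.conf on the controller (300 seconds is the value suggested in the official guide):
[scheduler]
discover_hosts_in_cells_interval = 300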
6.6 Verify the service
[root@controller ~]# openstack compute service list
+----+----------------+------------+----------+---------+-------+----------------------------+
| ID | Binary         | Host       | Zone     | Status  | State | Updated At                 |
+----+----------------+------------+----------+---------+-------+----------------------------+
|  1 | nova-conductor | controller | internal | enabled | up    | 2024-05-19T10:53:30.000000 |
|  2 | nova-scheduler | controller | internal | enabled | up    | 2024-05-19T10:53:33.000000 |
|  6 | nova-compute   | compute    | nova     | enabled | up    | 2024-05-19T10:53:34.000000 |
+----+----------------+------------+----------+---------+-------+----------------------------+
[root@controller ~]# nova-status upgrade check
# If this command fails with a 403 error, go back and add the placement configuration from section 4.9 to resolve it
7. Install Neutron
7.1 Database setup
[root@controller ~]# mysql -u root -p123
MariaDB [(none)]> CREATE DATABASE neutron;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
  IDENTIFIED BY '123';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
  IDENTIFIED BY '123';
7.2 Create the user
[root@controller ~]# openstack user create --domain default --password-prompt neutron
User Password:
Repeat User Password:
[root@controller ~]# openstack role add --project service --user neutron admin
7.3 Create the service endpoints
[root@controller ~]# openstack service create --name neutron \
  --description "OpenStack Networking" network
[root@controller ~]# openstack endpoint create --region RegionOne \
  network public http://controller:9696
[root@controller ~]# openstack endpoint create --region RegionOne \
  network internal http://controller:9696
[root@controller ~]# openstack endpoint create --region RegionOne \
  network admin http://controller:9696
7.4 Install Self-service networks
[root@controller ~]# yum install openstack-neutron openstack-neutron-ml2 \
  openstack-neutron-linuxbridge ebtables -y
7.5 Write the configuration file neutron.conf
[root@controller ~]# vim /etc/neutron/neutron.conf
[DEFAULT]
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
transport_url = rabbit://openstack:123@controller
auth_strategy = keystone

[database]
connection = mysql+pymysql://neutron:123@controller/neutron

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 123

[nova]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = 123

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
7.6 Write the configuration file ml2_conf.ini
[root@controller ~]# vim /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
type_drivers = flat,vlan,vxlan
extension_drivers = port_security

[ml2_type_flat]
flat_networks = provider

[ml2_type_vxlan]
vni_ranges = 1:1000

[securitygroup]
enable_ipset = true
7.7 Write linuxbridge_agent.ini
[root@controller ~]# vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
# Change ens34 to the name of your own NAT network interface
physical_interface_mappings = provider:ens34

[vxlan]
enable_vxlan = true
# Change the IP to your own
local_ip = 192.168.100.100
l2_population = true

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
7.8 Enable bridge filtering
[root@controller ~]# modprobe br_netfilter
[root@controller ~]# vim /etc/sysctl.conf
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
[root@controller ~]# sysctl -p
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
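Note that modprobe does not survive a reboot; on CentOS 7 a minimal way to load the module automatically at boot is via systemd-modules-load:
[root@controller ~]# echo br_netfilter > /etc/modules-load.d/br_netfilter.conf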
7.9 Write l3_agent.ini
[root@controller ~]# vim /etc/neutron/l3_agent.ini
[DEFAULT]
interface_driver = linuxbridge
7.10 Write dhcp_agent.ini
[root@controller ~]# vim /etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
7.11 Write metadata_agent.ini
[root@controller ~]# vim /etc/neutron/metadata_agent.ini
[DEFAULT]
nova_metadata_host = controller
# The 123 here can be any value you like; it will be needed again in a moment
metadata_proxy_shared_secret = 123
7.12 Configure nova to use the networking service
[root@controller ~]# vim /etc/nova/nova.conf
[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 123
service_metadata_proxy = true
# The 123 here must match the value defined in metadata_agent.ini above
metadata_proxy_shared_secret = 123
7.13 Finish the installation
[root@controller ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
[root@controller ~]# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
7.14 Start the services
[root@controller ~]# systemctl restart openstack-nova-api.service
[root@controller ~]# systemctl enable neutron-server.service \
  neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
  neutron-metadata-agent.service neutron-l3-agent.service --now
7.15 Verify the services
[root@controller neutron]# openstack network agent list
+--------------------------------------+--------------------+-----------+-------------------+-------+-------+---------------------------+
| ID                                   | Agent Type         | Host      | Availability Zone | Alive | State | Binary                    |
+--------------------------------------+--------------------+-----------+-------------------+-------+-------+---------------------------+
| 121cc747-3516-446e-bb6f-c6e95af3a000 | Metadata agent     | localhost | None              | :-)   | UP    | neutron-metadata-agent    |
| 17059e4f-c61f-4e8c-87ff-35ced7764543 | Linux bridge agent | localhost | None              | :-)   | UP    | neutron-linuxbridge-agent |
| 32fded8a-dc80-4316-9771-42055979b0b8 | L3 agent           | localhost | nova              | :-)   | UP    | neutron-l3-agent          |
| ae4ca75a-153b-4bf4-a284-8db4d338d757 | DHCP agent         | localhost | nova              | :-)   | UP    | neutron-dhcp-agent        |
| e930286f-f99c-4f35-b8c0-0d5e83e35bf8 | Linux bridge agent | compute   | None              | :-)   | UP    | neutron-linuxbridge-agent |
+--------------------------------------+--------------------+-----------+-------------------+-------+-------+---------------------------+
8. Install the Dashboard
8.1 Install the packages
[root@controller ~]# yum install openstack-dashboard -y
8.2 Modify the configuration file local_settings
[root@controller ~]# vim /etc/openstack-dashboard/local_settings
# Some of these settings already exist in the file and just need to be changed; add the ones that don't exist
OPENSTACK_HOST = "controller"
ALLOWED_HOSTS = ['*']
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
         'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
         'LOCATION': 'controller:11211',
    }
}
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 3,
}
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
# If the neutron you installed is the provider type, disable the third item here; leave the others unchanged
OPENSTACK_NEUTRON_NETWORK = {
    ...
    'enable_router': False,
    'enable_quotas': False,
    'enable_distributed_router': True,
    'enable_ha_router': False,
    'enable_lb': False,
    'enable_firewall': False,
    'enable_vpn': False,
    'enable_fip_topology_check': False,
}
TIME_ZONE = "Asia/Shanghai"
# This line must be added
WEBROOT='/dashboard'
8.3 Edit openstack-dashboard.conf
[root@controller ~]# vim /etc/httpd/conf.d/openstack-dashboard.conf
# Add this line
WSGIApplicationGroup %{GLOBAL}
8.4 Restart the services
[root@controller ~]# systemctl restart httpd.service memcached.service
At this point the core OpenStack components are all installed. If you need more, you can follow the official documentation to install additional components.
9. Verification
9.1 Log in to the dashboard
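Browse to http://192.168.100.100/dashboard (your controller IP plus the WEBROOT set above). With multidomain support enabled, the login form also asks for a domain: use Default, user admin, and the password chosen during the Keystone bootstrap (123 in this guide).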
9.2 Create a network and subnet
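The steps below are a CLI sketch matching this guide's layout, run with admin-login.sh sourced; you can equally do this in the dashboard. The 192.168.200.0/24 pool, gateway, and DNS values are examples for the NAT network and must be adjusted to your environment (the physical network name "provider" matches the flat_networks and physical_interface_mappings settings from section 7):
[root@controller ~]# openstack network create --share --external \
  --provider-physical-network provider --provider-network-type flat provider
[root@controller ~]# openstack subnet create --network provider \
  --allocation-pool start=192.168.200.100,end=192.168.200.200 \
  --dns-nameserver 114.114.114.114 --gateway 192.168.200.2 \
  --subnet-range 192.168.200.0/24 provider
[root@controller ~]# openstack network create selfservice
[root@controller ~]# openstack subnet create --network selfservice \
  --dns-nameserver 114.114.114.114 --gateway 172.16.1.1 \
  --subnet-range 172.16.1.0/24 selfservice
[root@controller ~]# openstack router create router
[root@controller ~]# openstack router add subnet router selfservice
[root@controller ~]# openstack router set router --external-gateway provider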
9.3 Create a flavor
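For the cirros test image a tiny flavor is enough; this is the m1.nano flavor from the official guide:
[root@controller ~]# openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano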
9.4 Create a virtual machine
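A minimal boot from the CLI using the pieces created above; the instance name demo-instance is arbitrary, and you can add --key-name if you have created a keypair:
[root@controller ~]# openstack server create --flavor m1.nano --image cirros \
  --network selfservice --security-group default demo-instance
[root@controller ~]# openstack server list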
This article is from 博客园 (cnblogs), author: FuShudi. Please credit the original link when reposting: https://www.cnblogs.com/fsdstudy/p/18200540