The dashboard requires nova, neutron, glance, and keystone. Other services can be installed in advance, but they must not be registered in Keystone before they are configured.
Otherwise the dashboard will not open. For example, if cinder is registered early, the dashboard sees the cinder entries in the catalog after it starts,
tries to connect to cinder, and since cinder is not yet configured, the dashboard fails to load.
Horizon only needs to connect to Keystone; logging in to Horizon also uses Keystone-authenticated users.
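To check which services are currently registered (and confirm nothing was registered ahead of time), the catalog can be listed from the controller; this assumes the admin credentials are already exported in the shell:
[root@linux-node1 ~]# openstack service list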
Install:
yum install openstack-dashboard -y
Configure:
vim /etc/openstack-dashboard/local_settings
29 ALLOWED_HOSTS = ['*',] # which hosts are allowed to access the dashboard
103 SECRET_KEY='30110465420bb59687ce' # left at the default, not modified
108 CACHES = { # memcached cache configuration
109 'default': {
110 'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
111 'LOCATION': '192.168.56.11:11211',
112 }
113 }
138 OPENSTACK_HOST = "192.168.56.11" # host where Keystone runs
140 OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user" # default Keystone role for dashboard users
320 TIME_ZONE = "Asia/Shanghai" # time zone setting
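Before restarting httpd it is worth confirming that memcached is running and reachable at the address configured in CACHES. A quick check, assuming memcached was installed earlier and nc (nmap-ncat) is available:
[root@linux-node1 ~]# systemctl status memcached
[root@linux-node1 ~]# echo -e "stats\nquit" | nc 192.168.56.11 11211 | head -5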
Restart the httpd service:
systemctl restart httpd
Open a browser and go to: http://192.168.56.11/dashboard/
Log in with the demo user and the demo password.
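If the login page does not come up, checking the HTTP response from the command line helps rule out an Apache-level problem, for example:
[root@linux-node1 ~]# curl -I http://192.168.56.11/dashboard/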
Problem encountered: the dashboard would not start. Symptom: after restarting httpd, the Keystone service got pushed out; after commenting out /etc/httpd/conf.d/openstack-dashboard.conf
and starting httpd again, Keystone returned to normal.
Final solution: install Keystone on a separate machine (remember to update the memcached and Keystone host IPs in local_settings).
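Since Horizon and Keystone both run as WSGI applications under the same httpd in this kind of setup, a workaround that is often suggested (not verified here, so treat it as an assumption) is to force the dashboard into the global WSGI application group by adding the following line to /etc/httpd/conf.d/openstack-dashboard.conf and restarting httpd:
WSGIApplicationGroup %{GLOBAL}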
1. Block storage: hard disks, LVM, DAS (direct-attached storage), SAN (FC-SAN, IP-SAN)
2. File storage: NFS, NAS
3. Object storage: distributed storage (Ceph, PB scale)
Distributed storage: Ceph
Cinder: provides cloud disks (volumes); the bottleneck for a cloud disk is the network.
Component overview:
cinder-api: accepts API requests and routes them to cinder-volume for execution. (Similar in role to nova-api.)
cinder-volume: responds to requests, reads from and writes to the block storage database to maintain state, interacts with other processes through the message queue,
and talks directly to the underlying block storage hardware or software. Through its driver architecture it can work with many different storage providers.
There can be multiple cinder-volume instances. (Similar in role to nova-compute.)
cinder-scheduler: a daemon that selects the optimal block storage provider node for a volume. (Similar to nova-scheduler.)
Based on the roles of these three components, cinder-api and cinder-scheduler are usually installed
on the controller node, and cinder-volume on the storage node.
(In this lab, cinder-volume is installed on a separate disk on the compute node.)
On the controller node:
[root@linux-node1 ~]# yum install openstack-cinder python-cinderclient -y
[root@linux-node1 ~]# vim /etc/cinder/cinder.conf
2516 connection = mysql://cinder:cinder@192.168.56.11/cinder
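The db sync below assumes the cinder database and the cinder MySQL account already exist. If they do not, a minimal sketch for creating them (credentials matching the connection string above):
[root@linux-node1 ~]# mysql -u root -p
CREATE DATABASE cinder;
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'cinder';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'cinder';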
Sync the database:
[root@linux-node1 ~]# su -s /bin/sh -c "cinder-manage db sync" cinder
Confirm the sync succeeded:
[root@linux-node1 ~]# mysql -ucinder -pcinder -h 192.168.56.11 -e "use cinder;show tables;"
+----------------------------+
| Tables_in_cinder |
+----------------------------+
| backups |
| cgsnapshots |
| consistencygroups |
| driver_initiator_data |
| encryption |
| image_volume_cache_entries |
| iscsi_targets |
| migrate_version |
| quality_of_service_specs |
| quota_classes |
| quota_usages |
| quotas |
| reservations |
| services |
| snapshot_metadata |
| snapshots |
| transfers |
| volume_admin_metadata |
| volume_attachment |
| volume_glance_metadata |
| volume_metadata |
| volume_type_extra_specs |
| volume_type_projects |
| volume_types |
| volumes |
+----------------------------+
[root@linux-node1 ~]# openstack user create --domain default --password-prompt cinder
User Password:cinder
Repeat User Password:cinder
+-----------+----------------------------------+
| Field | Value |
+-----------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | b47cb3f352b0462fb2939fc5b536a1a2 |
| name | cinder |
+-----------+----------------------------------+
[root@linux-node1 ~]# openstack role add --project service --user cinder admin
[root@linux-node1 ~]# vim /etc/cinder/cinder.conf
2294 rpc_backend = rabbit
2640 [keystone_authtoken]
2641 auth_uri = http://192.168.56.11:5000
2642 auth_url = http://192.168.56.11:35357
2643 auth_plugin = password
2644 project_domain_id = default
2645 user_domain_id = default
2646 project_name = service
2647 username = cinder
2648 password = cinder
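The install guide also points Compute at Block Storage; if that step is done here as well, /etc/nova/nova.conf gets a [cinder] section like the sketch below, which is why openstack-nova-api is restarted in the next command (option name as in the Liberty guide; adjust to your release):
[cinder]
os_region_name = RegionOne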
[root@linux-node1 ~]# systemctl restart openstack-nova-api.service
[root@linux-node1 ~]# systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-cinder-api.service to /usr/lib/systemd/system/openstack-cinder-api.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-cinder-scheduler.service to /usr/lib/systemd/system/openstack-cinder-scheduler.service.
[root@linux-node1 ~]# systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service
Create the services:
[root@linux-node1 ~]# openstack service create --name cinder \
> --description "OpenStack Block Storage" volume
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Block Storage |
| enabled | True |
| id | b069f3ddf10849729a7e24ba9598b16e |
| name | cinder |
| type | volume |
+-------------+----------------------------------+
[root@linux-node1 ~]# openstack service create --name cinderv2 \
> --description "OpenStack Block Storage" volumev2
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Block Storage |
| enabled | True |
| id | f54356c5dfad4d6db666a1e0361e19cd |
| name | cinderv2 |
| type | volumev2 |
+-------------+----------------------------------+
[root@linux-node1 ~]# openstack endpoint create --region RegionOne \
> volume public http://192.168.56.11:8776/v1/%\(tenant_id\)s
+--------------+--------------------------------------------+
| Field | Value |
+--------------+--------------------------------------------+
| enabled | True |
| id | 16c0b5acb8f4471ea2b81a3a34c8c337 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | b069f3ddf10849729a7e24ba9598b16e |
| service_name | cinder |
| service_type | volume |
| url | http://192.168.56.11:8776/v1/%(tenant_id)s |
+--------------+--------------------------------------------+
[root@linux-node1 ~]# openstack endpoint create --region RegionOne \
> volume internal http://192.168.56.11:8776/v1/%\(tenant_id\)s
+--------------+--------------------------------------------+
| Field | Value |
+--------------+--------------------------------------------+
| enabled | True |
| id | 9e07046c6464478391c3c741529194e8 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | b069f3ddf10849729a7e24ba9598b16e |
| service_name | cinder |
| service_type | volume |
| url | http://192.168.56.11:8776/v1/%(tenant_id)s |
+--------------+--------------------------------------------+
[root@linux-node1 ~]# openstack endpoint create --region RegionOne \
> volume admin http://192.168.56.11:8776/v1/%\(tenant_id\)s
+--------------+--------------------------------------------+
| Field | Value |
+--------------+--------------------------------------------+
| enabled | True |
| id | d7c455a939fc4b0d975ab2ba7745f397 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | b069f3ddf10849729a7e24ba9598b16e |
| service_name | cinder |
| service_type | volume |
| url | http://192.168.56.11:8776/v1/%(tenant_id)s |
+--------------+--------------------------------------------+
[root@linux-node1 ~]# openstack endpoint create --region RegionOne \
> volumev2 public http://192.168.56.11:8776/v2/%\(tenant_id\)s
+--------------+--------------------------------------------+
| Field | Value |
+--------------+--------------------------------------------+
| enabled | True |
| id | 534741eb6c2040679f1638d14f7907cb |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | f54356c5dfad4d6db666a1e0361e19cd |
| service_name | cinderv2 |
| service_type | volumev2 |
| url | http://192.168.56.11:8776/v2/%(tenant_id)s |
+--------------+--------------------------------------------+
[root@linux-node1 ~]# openstack endpoint create --region RegionOne \
> volumev2 internal http://192.168.56.11:8776/v2/%\(tenant_id\)s
+--------------+--------------------------------------------+
| Field | Value |
+--------------+--------------------------------------------+
| enabled | True |
| id | bf6f8b54a999495e8e912f8722b03081 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | f54356c5dfad4d6db666a1e0361e19cd |
| service_name | cinderv2 |
| service_type | volumev2 |
| url | http://192.168.56.11:8776/v2/%(tenant_id)s |
+--------------+--------------------------------------------+
[root@linux-node1 ~]# openstack endpoint create --region RegionOne \
> volumev2 admin http://192.168.56.11:8776/v2/%\(tenant_id\)s
+--------------+--------------------------------------------+
| Field | Value |
+--------------+--------------------------------------------+
| enabled | True |
| id | d5d8c3b07f77441980a39d961bef6ad7 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | f54356c5dfad4d6db666a1e0361e19cd |
| service_name | cinderv2 |
| service_type | volumev2 |
| url | http://192.168.56.11:8776/v2/%(tenant_id)s |
+--------------+--------------------------------------------+
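A quick way to confirm that all six endpoints were created (again assuming the admin credentials are loaded):
[root@linux-node1 ~]# openstack endpoint list | grep -E "volume|volumev2"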
[root@linux-node1 ~]# grep -vnE "^#|^$" /etc/cinder/cinder.conf
1:[DEFAULT]
421:glance_host = 192.168.56.11
536:auth_strategy = keystone
2294:rpc_backend = rabbit
2371:[BRCD_FABRIC_EXAMPLE]
2404:[CISCO_FABRIC_EXAMPLE]
2437:[cors]
2465:[cors.subdomain]
2493:[database]
2516:connection = mysql://cinder:cinder@192.168.56.11/cinder
2593:[fc-zone-manager]
2621:[keymgr]
2640:[keystone_authtoken]
2641:auth_uri = http://192.168.56.11:5000
2642:auth_url = http://192.168.56.11:35357
2643:auth_plugin = password
2644:project_domain_id = default
2645:user_domain_id = default
2646:project_name = service
2647:username = cinder
2648:password = cinder
2811:[matchmaker_redis]
2840:[matchmaker_ring]
2859:[oslo_concurrency]
2874:lock_path = /var/lib/cinder/tmp
2877:[oslo_messaging_amqp]
2976:[oslo_messaging_qpid]
3119:[oslo_messaging_rabbit]
3173:rabbit_host = 192.168.56.11
3177:rabbit_port = 5672
3189:rabbit_userid = openstack
3193:rabbit_password = openstack
3348:[oslo_middleware]
3369:[oslo_policy]
3394:[oslo_reports]
3404:[profiler]
On the storage node:
Add a 50 GB disk:
[root@linux-node2 ~]# fdisk -l
Disk /dev/sdb: 53.7 GB, 53687091200 bytes, 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sda: 32.2 GB, 32212254720 bytes, 62914560 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000bbf7d
Device Boot Start End Blocks Id System
/dev/sda1 * 2048 2099199 1048576 83 Linux
/dev/sda2 2099200 35653631 16777216 82 Linux swap / Solaris
/dev/sda3 35653632 62914559 13630464 83 Linux
[root@linux-node2 ~]# vim /etc/lvm/lvm.conf
devices {
    ...
    filter = [ "a/sdb/", "r/.*/" ]
}
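This filter accepts only /dev/sdb and rejects every other device. Per the install guide, if the operating system disk also used LVM it would have to be accepted as well; that is not needed here because /dev/sda uses plain partitions, but for reference such a filter would look like:
filter = [ "a/sda/", "a/sdb/", "r/.*/" ]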
[root@linux-node2 ~]# pvcreate /dev/sdb
Physical volume "/dev/sdb" successfully created
[root@linux-node2 ~]# vgcreate cinder-volumes /dev/sdb
Volume group "cinder-volumes" successfully created
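Quick verification that the physical volume and volume group were created, before installing the cinder packages:
[root@linux-node2 ~]# pvs
[root@linux-node2 ~]# vgs cinder-volumes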
[root@linux-node2 ~]# yum install openstack-cinder targetcli python-oslo-policy
[root@linux-node1 ~]# scp /etc/cinder/cinder.conf root@192.168.56.12:/etc/cinder/cinder.conf
root@192.168.56.12's password:
[root@linux-node2 ~]# grep -vnE "^#|^$" /etc/cinder/cinder.conf
1:[DEFAULT]
421:glance_host = 192.168.56.11
536:auth_strategy = keystone
540:enabled_backends = lvm
2294:rpc_backend = rabbit
2371:[BRCD_FABRIC_EXAMPLE]
2404:[CISCO_FABRIC_EXAMPLE]
2437:[cors]
2465:[cors.subdomain]
2493:[database]
2516:connection = mysql://cinder:cinder@192.168.56.11/cinder
2593:[fc-zone-manager]
2621:[keymgr]
2640:[keystone_authtoken]
2641:auth_uri = http://192.168.56.11:5000
2642:auth_url = http://192.168.56.11:35357
2643:auth_plugin = password
2644:project_domain_id = default
2645:user_domain_id = default
2646:project_name = service
2647:username = cinder
2648:password = cinder
2811:[matchmaker_redis]
2840:[matchmaker_ring]
2859:[oslo_concurrency]
2874:lock_path = /var/lib/cinder/tmp
2877:[oslo_messaging_amqp]
2976:[oslo_messaging_qpid]
3119:[oslo_messaging_rabbit]
3173:rabbit_host = 192.168.56.11
3177:rabbit_port = 5672
3189:rabbit_userid = openstack
3193:rabbit_password = openstack
3348:[oslo_middleware]
3369:[oslo_policy]
3394:[oslo_reports]
3404:[profiler]
3414:[lvm]
3415:volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
3416:volume_group = cinder-volumes
3417:iscsi_protocol = iscsi
3418:iscsi_helper = lioadm
[root@linux-node2 ~]# systemctl enable openstack-cinder-volume.service target.service
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-cinder-volume.service to /usr/lib/systemd/system/openstack-cinder-volume.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/target.service to /usr/lib/systemd/system/target.service.
[root@linux-node2 ~]# systemctl start openstack-cinder-volume.service target.service
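Back on the controller, the new cinder-volume service should report as up, and a small test volume can be created to confirm the whole chain works (commands assume the admin credentials are loaded; the volume name test-vol is arbitrary):
[root@linux-node1 ~]# cinder service-list
[root@linux-node1 ~]# cinder create --display-name test-vol 1
[root@linux-node1 ~]# cinder list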