How can the OpenStack kolla-kubernetes project be used to containerize OpenStack and deploy it on Kubernetes, ending up with an all-in-one (AIO) OpenStack container cloud? Let's continue the deployment:
Deploying kolla-kubernetes
■ Override the default RBAC settings
Override the default RBAC settings with kubectl replace, as follows:
kubectl replace -f <(cat <<EOF
apiVersion: rbac.authorization.k8s.io/v1alpha1
kind: ClusterRoleBinding
metadata:
  name: cluster-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: Group
  name: system:masters
- kind: Group
  name: system:authenticated
- kind: Group
  name: system:unauthenticated
EOF
)
■ Install Helm
Helm is the package manager for Kubernetes, analogous to the yum package manager: yum installs RPM packages, while Helm installs charts, which play the role of the RPM packages here. Helm has a client side and a server side; the server side, called Tiller, runs inside Kubernetes as a Docker container. To make the Helm installation smoother, you can pull the Tiller container image to the local host in advance with the following commands:
docker pull warrior/kubernetes-helm:2.4.1
docker tag warrior/kubernetes-helm:2.4.1 gcr.io/kubernetes-helm/tiller:v2.4.1
The simplest way to install Helm is:
sudo curl -L https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > get_helm.sh
sudo chmod 700 get_helm.sh
sudo ./get_helm.sh
sudo helm init
Once Helm is installed, a new pod named tiller-deploy-xxx appears in the kube-system namespace in the Running state.
After a successful install, `helm version` shows both the client and server versions.
■ Install kolla-ansible and kolla-kubernetes
Clone the community kolla-ansible and kolla-kubernetes source code, as follows:
git clone http://github.com/openstack/kolla-ansible
git clone http://github.com/openstack/kolla-kubernetes
Install kolla-ansible and kolla-kubernetes, as follows:
sudo pip install -U kolla-ansible/ kolla-kubernetes/
Copy the default kolla configuration files to the /etc directory, as follows:
sudo cp -aR /usr/share/kolla-ansible/etc_examples/kolla /etc
Copy the kolla-kubernetes configuration files to the /etc directory, as follows:
sudo cp -aR kolla-kubernetes/etc/kolla-kubernetes /etc
Generate the password file for the OpenStack services and users, as follows:
sudo kolla-kubernetes-genpwd
Create a dedicated namespace named kolla in Kubernetes, as follows:
kubectl create namespace kolla
Label the AIO node as both a controller node and a compute node, as follows:
kubectl label node $(hostname) kolla_compute=true
kubectl label node $(hostname) kolla_controller=true
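Before moving on, it is worth confirming that both labels actually landed on the node. A small convenience sketch (the helper name is illustrative, not part of kolla-kubernetes):

```shell
# Hypothetical helper: verify the AIO node carries both kolla labels.
check_kolla_labels() {
  local labels
  labels=$(kubectl get node "$(hostname)" --show-labels 2>/dev/null) || return 1
  echo "$labels" | grep -q 'kolla_compute=true'    || { echo "missing kolla_compute"; return 1; }
  echo "$labels" | grep -q 'kolla_controller=true' || { echo "missing kolla_controller"; return 1; }
  echo "labels ok"
}
```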
Edit the /etc/kolla/globals.yml configuration file. Two variables must be set by the user: network_interface and neutron_external_interface. network_interface is the management interface (e.g. eth0) and by default also carries the API endpoints of the OpenStack services; neutron_external_interface is the physical interface (e.g. eth1) that Neutron bridges to the external network, and no IP address should be configured on it manually.
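For example, with the interface names used later in this walkthrough (ens34 for management, ens41 for the Neutron external network; substitute your own), the two lines in /etc/kolla/globals.yml would read:

```yaml
# /etc/kolla/globals.yml (interface names are environment-specific)
network_interface: "ens34"            # management / API interface
neutron_external_interface: "ens41"   # external bridge interface, no IP configured
```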
Append the services to be enabled to the end of /etc/kolla/globals.yml, as follows:
cat <<EOF > add-to-globals.yml
kolla_install_type: "source"
tempest_image_alt_id: "{{ tempest_image_id }}"
tempest_flavor_ref_alt_id: "{{ tempest_flavor_ref_id }}"
neutron_plugin_agent: "openvswitch"
api_interface_address: 0.0.0.0
tunnel_interface_address: 0.0.0.0
orchestration_engine: KUBERNETES
memcached_servers: "memcached"
keystone_admin_url: "http://keystone-admin:35357/v3"
keystone_internal_url: "http://keystone-internal:5000/v3"
keystone_public_url: "http://keystone-public:5000/v3"
glance_registry_host: "glance-registry"
neutron_host: "neutron"
keystone_database_address: "mariadb"
glance_database_address: "mariadb"
nova_database_address: "mariadb"
nova_api_database_address: "mariadb"
neutron_database_address: "mariadb"
cinder_database_address: "mariadb"
ironic_database_address: "mariadb"
placement_database_address: "mariadb"
rabbitmq_servers: "rabbitmq"
openstack_logging_debug: "True"
enable_haproxy: "no"
enable_heat: "no"
enable_cinder: "yes"
enable_cinder_backend_lvm: "yes"
enable_cinder_backend_iscsi: "yes"
enable_cinder_backend_rbd: "no"
enable_ceph: "no"
enable_elasticsearch: "no"
enable_kibana: "no"
glance_backend_ceph: "no"
cinder_backend_ceph: "no"
nova_backend_ceph: "no"
EOF
cat ./add-to-globals.yml | sudo tee -a /etc/kolla/globals.yml
If you are deploying inside a virtual machine, Nova must use the qemu virtualization engine, as follows:
sudo mkdir /etc/kolla/config
sudo tee /etc/kolla/config/nova.conf <<EOF
[libvirt]
virt_type=qemu
cpu_mode=none
EOF
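Whether qemu is required can be read off the CPU flags: vmx (Intel) or svm (AMD) indicates hardware virtualization support. A small sketch (the helper name is illustrative) that inspects a cpuinfo file:

```shell
# Return "kvm" when hardware virtualization (vmx/svm) is available,
# otherwise "qemu" (the usual case inside a VM without nested virt).
pick_virt_type() {
  if grep -Eq 'vmx|svm' "${1:-/proc/cpuinfo}"; then
    echo kvm
  else
    echo qemu
  fi
}
```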
Generate the default configuration files for the OpenStack services, as follows:
sudo kolla-ansible genconfig
Create the Kubernetes secrets for the OpenStack services and register them with the Kubernetes cluster, as follows:
kolla-kubernetes/tools/secret-generator.py create
Create and register kolla's config maps, as follows:
kollakube res create configmap mariadb keystone horizon rabbitmq memcached nova-api nova-conductor nova-scheduler glance-api-haproxy glance-registry-haproxy glance-api glance-registry neutron-server neutron-dhcp-agent neutron-l3-agent neutron-metadata-agent neutron-openvswitch-agent openvswitch-db-server openvswitch-vswitchd nova-libvirt nova-compute nova-consoleauth nova-novncproxy nova-novncproxy-haproxy neutron-server-haproxy nova-api-haproxy cinder-api cinder-api-haproxy cinder-backup cinder-scheduler cinder-volume iscsid tgtd keepalived placement-api placement-api-haproxy
Enable the resolv.conf workaround, as follows:
kolla-kubernetes/tools/setup-resolv-conf.sh kolla
Build Helm's microcharts, service charts, and metacharts, as follows:
kolla-kubernetes/tools/helm_build_all.sh ./
The build takes some time; when it completes, the current directory contains a large number of .tgz files, at least 150 of them.
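The completeness check can be automated: count the chart archives and compare against the 150 threshold mentioned above. A convenience sketch (the helper name is illustrative):

```shell
# Count chart archives in a directory; helm_build_all.sh is considered
# complete when this reports more than 150 .tgz files.
count_charts() {
  ls "${1:-.}"/*.tgz 2>/dev/null | wc -l | tr -d ' '
}
```

Usage: `[ "$(count_charts .)" -gt 150 ] && echo "build looks complete"`.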
Create a local cloud.yaml file, which supplies the values for installing the Helm charts, as follows:
global:
  kolla:
    all:
      docker_registry: 192.168.128.13:4000   # local registry address
      image_tag: "4.0.0"
      kube_logger: false
      external_vip: "192.168.128.13"
      base_distro: "centos"
      install_type: "source"
      tunnel_interface: "ens34"              # management interface
      resolve_conf_net_host_workaround: true
    keystone:
      all:
        admin_port_external: "true"
        dns_name: "192.168.128.13"
      public:
        all:
          port_external: "true"
    rabbitmq:
      all:
        cookie: 67
    glance:
      api:
        all:
          port_external: "true"
    cinder:
      api:
        all:
          port_external: "true"
      volume_lvm:
        all:
          element_name: cinder-volume
        daemonset:
          lvm_backends:
          - '192.168.128.13': 'cinder-volumes'   # name of the cinder backend VG
    ironic:
      conductor:
        daemonset:
          selector_key: "kolla_conductor"
    nova:
      placement_api:
        all:
          port_external: true
      novncproxy:
        all:
          port: 6080
          port_external: true
    openvswitch:
      all:
        add_port: true
        ext_bridge_name: br-ex
        ext_interface_name: ens41              # Neutron external bridge interface
        setup_bridge: true
    horizon:
      all:
        port_external: true
cloud.yaml must be adapted to your own environment: 192.168.128.13 above is the IP address on the author's management interface ens34, so change it accordingly before use.
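Since that address appears in several places in cloud.yaml, a sed one-liner can swap all occurrences at once. A convenience sketch (the helper name is illustrative; 192.168.128.13 is the author's literal address):

```shell
# Replace the author's management IP in cloud.yaml with your own.
# Usage: replace_ip cloud.yaml 10.0.0.5
replace_ip() {
  sed -i "s/192\.168\.128\.13/$2/g" "$1"
}
```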
■ Deploy OpenStack on Kubernetes with Helm
First deploy MariaDB and wait for its pod to reach the Running state, as follows:
helm install --debug kolla-kubernetes/helm/service/mariadb --namespace kolla --name mariadb --values ./cloud.yaml
Once the database is stable, deploy the remaining OpenStack services, as follows:
helm install --debug kolla-kubernetes/helm/service/rabbitmq --namespace kolla --name rabbitmq --values ./cloud.yaml
helm install --debug kolla-kubernetes/helm/service/memcached --namespace kolla --name memcached --values ./cloud.yaml
helm install --debug kolla-kubernetes/helm/service/keystone --namespace kolla --name keystone --values ./cloud.yaml
helm install --debug kolla-kubernetes/helm/service/glance --namespace kolla --name glance --values ./cloud.yaml
helm install --debug kolla-kubernetes/helm/service/cinder-control --namespace kolla --name cinder-control --values ./cloud.yaml
helm install --debug kolla-kubernetes/helm/service/horizon --namespace kolla --name horizon --values ./cloud.yaml
helm install --debug kolla-kubernetes/helm/service/openvswitch --namespace kolla --name openvswitch --values ./cloud.yaml
helm install --debug kolla-kubernetes/helm/service/neutron --namespace kolla --name neutron --values ./cloud.yaml
helm install --debug kolla-kubernetes/helm/service/nova-control --namespace kolla --name nova-control --values ./cloud.yaml
helm install --debug kolla-kubernetes/helm/service/nova-compute --namespace kolla --name nova-compute --values ./cloud.yaml
Once nova-compute reaches the Running state, create the cell0 database, as follows:
helm install --debug kolla-kubernetes/helm/microservice/nova-cell0-create-db-job --namespace kolla --name nova-cell0-create-db-job --values ./cloud.yaml
helm install --debug kolla-kubernetes/helm/microservice/nova-api-create-simple-cell-job --namespace kolla --name nova-api-create-simple-cell --values ./cloud.yaml
Once all of the pods above are Running, deploy Cinder LVM. This assumes a volume group (VG) named cinder-volumes already exists on the system; if it does not, create it first, as follows:
pvcreate /dev/sdb /dev/sdc
vgcreate cinder-volumes /dev/sdb /dev/sdc
Install cinder-volume, as follows:
helm install --debug kolla-kubernetes/helm/service/cinder-volume-lvm --namespace kolla --name cinder-volume-lvm --values ./cloud.yaml
Note: to remove a chart deployed with Helm, such as cinder-volume-lvm, run:
helm delete cinder-volume-lvm --purge
This clears the cinder-volume pods out of the Kubernetes cluster.
At this point, all of the OpenStack services have been deployed. Before operating the OpenStack cluster, wait until every pod in the Kubernetes cluster reaches the Running state;
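The wait can be scripted rather than watched by hand. A minimal sketch, assuming kubectl is already configured for the cluster (timeout is in seconds):

```shell
# Poll until every pod in the given namespace is Running or Completed,
# or give up after the timeout.
wait_for_pods() {
  local ns="$1" timeout="${2:-600}" elapsed=0 pending
  while [ "$elapsed" -lt "$timeout" ]; do
    # Column 3 of `kubectl get pods --no-headers` is the pod STATUS.
    pending=$(kubectl get pods -n "$ns" --no-headers 2>/dev/null \
      | awk '$3 != "Running" && $3 != "Completed"' | wc -l | tr -d ' ')
    [ "$pending" -eq 0 ] && return 0
    sleep 10
    elapsed=$((elapsed + 10))
  done
  return 1
}
```

Usage: `wait_for_pods kolla 900 || echo "pods still not ready"`.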
Check all deployments in the Kubernetes cluster;
Check all services in the Kubernetes cluster;
As the output shows, every service is automatically assigned an IP address from the 10.3.3.0/24 range, along with its corresponding port. Once the Kubernetes API objects are confirmed to be running normally, the OpenStack cluster can be operated through the OpenStack command-line client. First, generate and load the openrc file, as follows:
kolla-kubernetes/tools/build_local_admin_keystonerc.sh ext
source ~/keystonerc_admin
Initialize OpenStack with the init-runonce script shipped with kolla-ansible and launch a VM, as follows:
kolla-ansible/tools/init-runonce
Create a floating IP address and attach it to the VM, as follows:
openstack server add floating ip demo1 $(openstack floating ip create public1 -f value -c floating_ip_address)
Check the newly created VM;
Log in to the dashboard (http://192.168.128.13);
View the created instance on the dashboard;
Create a block storage volume and attach it to the instance demo1;
With that, the Ocata release of OpenStack has been successfully deployed on a Kubernetes cluster. For a number of reasons, the kolla-kubernetes project is not yet fit for production deployment; the community currently supports only AIO deployments for development and experimentation. As Kubernetes continues to rise, kolla-kubernetes should receive ever more attention, and it is safe to predict that deploying OpenStack container clouds through Kubernetes will become a mainstream direction for OpenStack in the near future.
This article is reposted from the K8S Technology Community: "Tutorial | Deploying an OpenStack container cloud on K8S (part 2)".