I. A Brief Introduction to ClickHouse
What is ClickHouse? ClickHouse is a column-oriented database management system (DBMS) for online analytical processing (OLAP).
For the full concepts, see the introduction in the official documentation: https://clickhouse.tech/docs/zh/
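To give a concrete feel for "columnar OLAP", here is a minimal, illustrative ClickHouse SQL example; the table and column names are invented for this sketch and are not used elsewhere in this post.

-- a MergeTree table: ClickHouse stores each column separately on disk
CREATE TABLE IF NOT EXISTS visits
(
    event_date  Date,
    user_id     UInt64,
    duration_ms UInt32
)
ENGINE = MergeTree
ORDER BY (event_date, user_id);

-- a typical OLAP aggregation: only the referenced columns are read,
-- which is where the columnar layout pays off
SELECT event_date, count() AS pv, avg(duration_ms) AS avg_duration
FROM visits
GROUP BY event_date
ORDER BY event_date;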
II. ClickHouse Features and Business Value
III. Building a ClickHouse Cluster on CentOS 7
1. Base Environment
Three CentOS 7.9 servers, all with internet access:
node1 192.168.31.121
node2 192.168.31.122
node3 192.168.31.123
2. Building the ZooKeeper + ClickHouse Cluster with Scripts
The steps are as follows:
cd /opt
rz    # upload clickhouse.zip
unzip clickhouse.zip
# run the install script that matches the node you are on
# e.g. on node1, run: sh jdk_zookeeper_clickhouse_node1.sh
sh jdk_zookeeper_clickhouse_node1.sh
The content of jdk_zookeeper_clickhouse_node1.sh is shown below, taking node1 as the example.
[root@node1 opt]# cat jdk_zookeeper_clickhouse_node1.sh
#!/bin/bash
#wget -O /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
#sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo
#wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
#yum clean all
#yum makecache

echo "-----------base environment setup----------------"
setenforce 0
sed -i 's/^SELINUX=.*$/SELINUX=disabled/' /etc/selinux/config
systemctl disable firewalld
systemctl stop firewalld

cat >> /etc/security/limits.conf << EOF
* soft nofile 65536
* hard nofile 65536
* soft nproc 131072
* hard nproc 131072
EOF

cat >> /etc/security/limits.d/90-nproc.conf << EOF
* soft nofile 65536
* hard nofile 65536
* soft nproc 131072
* hard nproc 131072
EOF

cat >> /etc/hosts << EOF
192.168.31.121 node1
192.168.31.122 node2
192.168.31.123 node3
EOF

cat > /etc/yum.repos.d/clickhouse.repo << EOF
[repo.yandex.ru_clickhouse_rpm_stable_x86_64]
name=clickhouse stable
baseurl=https://mirrors.tuna.tsinghua.edu.cn/clickhouse/rpm/stable/x86_64
enabled=1
EOF

echo "-----------install JDK----------------"
cd /opt
sleep 5
rpm -ivh jdk-8u251-linux-x64.rpm
echo "export JAVA_HOME=/usr/java/jdk1.8.0_251-amd64" >> /etc/profile
echo "export PATH=\$PATH:\$JAVA_HOME/bin" >> /etc/profile
echo "export CLASSPATH=.:\$JAVA_HOME/jre/lib:\$JAVA_HOME/lib:\$JAVA_HOME/lib/tools.jar" >> /etc/profile

echo "-----------install zookeeper----------------"
cd /opt
tar -zxf apache-zookeeper-3.6.2-bin.tar.gz
mv apache-zookeeper-3.6.2-bin zookeeper
cd zookeeper
mkdir data
cd conf
cp zoo_sample.cfg zoo.cfg
sed -i "s#dataDir=/tmp/zookeeper#dataDir=/opt/zookeeper/data#g" zoo.cfg
echo "server.1=node1:2888:3888" >> zoo.cfg
echo "server.2=node2:2888:3888" >> zoo.cfg
echo "server.3=node3:2888:3888" >> zoo.cfg
echo 1 > /opt/zookeeper/data/myid
#echo 2 > /opt/zookeeper/data/myid
#echo 3 > /opt/zookeeper/data/myid
echo "export PATH=\$PATH:/opt/zookeeper/bin" >> /etc/profile
. /etc/profile

echo "-----------install clickhouse---------------"
sed -i "s/gpgcheck=1/gpgcheck=0/g" /etc/yum.conf
yum install clickhouse-server clickhouse-client -y
\cp -rf /opt/config_node1.xml /etc/clickhouse-server/config.xml
#\cp -rf /opt/config_node2.xml /etc/clickhouse-server/config.xml
#\cp -rf /opt/config_node3.xml /etc/clickhouse-server/config.xml
\cp -rf /opt/users.xml /etc/clickhouse-server/
mkdir -p /opt/clickhouse
chown -R clickhouse:clickhouse /opt/clickhouse

echo "-----------start zookeeper---------------"
zkServer.sh start
sleep 2
zkServer.sh status

echo "-----------start clickhouse---------------"
systemctl enable clickhouse-server
systemctl start clickhouse-server
systemctl status clickhouse-server
sleep 2

echo "-----------connect with clickhouse-client---------------"
clickhouse-client --user=ck --password=clickhouse2021 -m --host=node1 --port=9000
#clickhouse-client --user=ck --password=clickhouse2021 -m --host=node2 --port=9000
#clickhouse-client --user=ck --password=clickhouse2021 -m --host=node3 --port=9000
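For node2 and node3, the corresponding scripts differ only in the node-specific lines that appear commented out above. A sketch of the node2 variant (assuming the same file names; node3 is analogous with 3/node3):

echo 2 > /opt/zookeeper/data/myid                                 # ZooKeeper id for node2
\cp -rf /opt/config_node2.xml /etc/clickhouse-server/config.xml   # node-specific config.xml (macros differ)
clickhouse-client --user=ck --password=clickhouse2021 -m --host=node2 --port=9000

One practical note: the script sources /etc/profile only inside its own shell, so after it finishes, run source /etc/profile (or log in again) in your interactive session so that the JAVA_HOME and ZooKeeper PATH entries take effect there as well.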
The following places in config.xml were all modified:
-- 1) All paths were changed to the /opt/clickhouse directory
-- 2) The cluster <remote_servers> section, the <zookeeper> section, and the <macros> section were updated (see the sketch after this list)
The <macros> values differ from node to node, so take care to keep them distinct.
My layout here is three shards with one replica each; adjust it to fit your own environment.
-- 3) The listen address was changed to 0.0.0.0
<listen_host>0.0.0.0</listen_host>
-- 4) The ck user, its password, and related settings were added to users.xml (also sketched after this list)
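Since the original screenshots are not reproduced here, the following is a minimal sketch of what those config.xml sections might look like for a three-shard, one-replica layout. The cluster name cluster_clickhouse matches the name checked later in system.clusters; the ports and the shard/replica macro values are assumptions to adapt to your own environment.

<!-- inside the root element of /etc/clickhouse-server/config.xml -->
<remote_servers>
    <cluster_clickhouse>
        <shard>
            <replica><host>node1</host><port>9000</port></replica>
        </shard>
        <shard>
            <replica><host>node2</host><port>9000</port></replica>
        </shard>
        <shard>
            <replica><host>node3</host><port>9000</port></replica>
        </shard>
    </cluster_clickhouse>
</remote_servers>

<zookeeper>
    <node index="1"><host>node1</host><port>2181</port></node>
    <node index="2"><host>node2</host><port>2181</port></node>
    <node index="3"><host>node3</host><port>2181</port></node>
</zookeeper>

<!-- macros differ per node; on node1, for example: -->
<macros>
    <shard>01</shard>
    <replica>node1</replica>
</macros>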
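Likewise, a minimal sketch of the ck user in users.xml; the plaintext <password> is shown only for simplicity (password_sha256_hex is preferable), and the default profile and quota names are assumed:

<users>
    <ck>
        <password>clickhouse2021</password>
        <networks>
            <ip>::/0</ip>
        </networks>
        <profile>default</profile>
        <quota>default</quota>
    </ck>
</users>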
3. Verifying the Cluster Status
On node1:
zkServer.sh status
systemctl status clickhouse-server
clickhouse-client --user=ck --password=clickhouse2021 -m --host=node1 --port=9000
:) select * from system.clusters;
If the output shows cluster information for the cluster named cluster_clickhouse, the cluster has been set up successfully.
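As an optional further check, you can create a small replicated table plus a distributed table across the cluster. This is only a sketch, assuming the cluster name cluster_clickhouse and the {shard}/{replica} macros described above; the database, table, and column names are made up for illustration.

-- run inside clickhouse-client on any node
CREATE TABLE default.t_local ON CLUSTER cluster_clickhouse
(
    id  UInt64,
    msg String
)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/{shard}/t_local', '{replica}')
ORDER BY id;

CREATE TABLE default.t_all ON CLUSTER cluster_clickhouse AS default.t_local
ENGINE = Distributed(cluster_clickhouse, default, t_local, rand());

INSERT INTO default.t_all VALUES (1, 'hello'), (2, 'world'), (3, 'clickhouse');

-- distributed inserts are asynchronous by default, so wait a moment, then:
SELECT hostName(), count() FROM default.t_all GROUP BY hostName();

If the last query returns rows from all three hosts, data is being sharded across the cluster as expected.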
That wraps up this simple ClickHouse cluster setup.
Reply clickhouse in the backend of the official WeChat account to get the clickhouse.zip file (installation scripts plus the related dependency files) and the slides 《ClickHouse知识讲解PPT.pptx》.