1. Environment Preparation
1.1 Configure time synchronization
CentOS 7 and later use chrony for clock synchronization. Install chrony, configure synchronization, and enable it at boot.
yum -y install chrony  # installed by default; if it is missing, run this command
1.2 Configure hostnames and host mappings
- Set the hostnames:
[root@cdh1 ~]# hostnamectl set-hostname hdp1
[root@cdh2 ~]# hostnamectl set-hostname hdp3
[root@cdh3 ~]# hostnamectl set-hostname hdp5
- Set the host mappings (required on every node): edit /etc/hosts with vi and add the following:
10.10.101.1 hdp1
10.10.101.3 hdp3
10.10.101.5 hdp5
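The mapping step can also be scripted; a minimal sketch, where HOSTS_FILE is a demo variable that defaults to a scratch file so the snippet is safe to try (on a real node, point it at /etc/hosts and run with sudo):

```shell
# Append the three cluster mappings in one shot.
# HOSTS_FILE defaults to a scratch file for safe testing; on a node,
# run with sudo and HOSTS_FILE=/etc/hosts.
HOSTS_FILE="${HOSTS_FILE:-hosts.demo}"
cat >> "$HOSTS_FILE" <<'EOF'
10.10.101.1 hdp1
10.10.101.3 hdp3
10.10.101.5 hdp5
EOF
```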
1.3 Disable swapping and transparent huge pages
Note: required on all nodes.
sysctl -w vm.swappiness=0
echo "vm.swappiness=0" >> /etc/sysctl.conf
echo never > /sys/kernel/mm/transparent_hugepage/defrag
echo never > /sys/kernel/mm/transparent_hugepage/enabled
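Note that the two transparent_hugepage echoes do not survive a reboot. One common workaround (an assumption here, not part of the original steps) is to replay them from rc.local at boot; RC_LOCAL below is a demo variable defaulting to a scratch file, while on a node the target is /etc/rc.d/rc.local (which must be executable):

```shell
# Persist the THP settings across reboots by replaying them at boot.
# RC_LOCAL defaults to a scratch file; on a node use /etc/rc.d/rc.local.
RC_LOCAL="${RC_LOCAL:-rc.local.demo}"
cat >> "$RC_LOCAL" <<'EOF'
echo never > /sys/kernel/mm/transparent_hugepage/defrag
echo never > /sys/kernel/mm/transparent_hugepage/enabled
EOF
```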
1.4 Database driver setup
Note: required on all nodes.
Step 1: rename the MySQL driver jar to drop the version number:
mv mysql-connector-java-5.1.44-bin.jar mysql-connector-java.jar
Step 2: move the jar into the shared Java directory:
mv mysql-connector-java.jar /usr/share/java/
Step 3: distribute the driver to the other two servers, then install it there:
scp mysql-connector-java.jar admin@10.10.101.3:/home/admin/
scp mysql-connector-java.jar admin@10.10.101.5:/home/admin/
# on each receiving server:
sudo mkdir -p /usr/share/java/
sudo cp /home/admin/mysql-connector-java.jar /usr/share/java/
1.5 Install the JDK [/etc/profile]
Note: the Java environment variables must be set on every machine.
Copy the /usr/local/jdk1.8.0_112 directory via scp from 10.10.101.4, or download and unpack it yourself.
- cd /usr/local/
- Create a symlink: sudo ln -s jdk1.8.0_112 jdk
- Set the environment variables:
- sudo vi /etc/profile
- export JAVA_HOME=/usr/local/jdk
- export PATH=$PATH:$JAVA_HOME/bin
- source /etc/profile
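An alternative to editing /etc/profile directly is a drop-in under /etc/profile.d; a sketch, with PROFILE as a demo variable defaulting to a scratch file (on a node, write /etc/profile.d/java.sh with sudo):

```shell
# Write the Java environment variables as a profile.d drop-in.
# PROFILE defaults to a scratch file; on a node use /etc/profile.d/java.sh.
PROFILE="${PROFILE:-java.sh.demo}"
cat > "$PROFILE" <<'EOF'
export JAVA_HOME=/usr/local/jdk
export PATH=$PATH:$JAVA_HOME/bin
EOF
```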
1.6 Passwordless SSH
Switch to root: sudo -i -u root
Generate a key pair (if none exists): ssh-keygen
View the private key (you will paste it into the Ambari wizard later): cat ~/.ssh/id_rsa
Copy the public key to each node:
ssh-copy-id hdp1
ssh-copy-id -i
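The whole flow can be done non-interactively; a sketch, where KEY is a demo path so the snippet is safe to try (on the nodes, use the default ~/.ssh/id_rsa, and run ssh-copy-id once per host, entering that host's password when prompted):

```shell
# Generate a key pair without prompts; the copy step to each node is
# shown commented out. KEY is a demo path; on a node use ~/.ssh/id_rsa.
KEY="${KEY:-./demo_id_rsa}"
[ -f "$KEY" ] || ssh-keygen -t rsa -N '' -f "$KEY" -q
# for host in hdp1 hdp3 hdp5; do ssh-copy-id -i "$KEY.pub" root@"$host"; done
```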
1.7 Disable the firewall
Check the firewall status: firewall-cmd --state
Stop firewalld:
systemctl stop firewalld.service
Disable firewalld at boot:
systemctl disable firewalld.service
1.8 Disable SELinux
- Check the current SELinux status: getenforce
- Enforcing means enabled
- Disabled means disabled
- sudo vim /etc/sysconfig/selinux
- Change the setting to: SELINUX=disabled
- Reboot: reboot
1.9 Set the ulimit parameters
Note: all nodes.
The official recommendation is at least 10000.
cd /etc/security/limits.d/
sudo vi hadoop.conf
- Add the following:
* soft nproc 65535
* hard nproc 65535
* soft nofile 65535
* hard nofile 65535
- Log out of the current session and log back in for the change to take effect.
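The edit above can equally be written in one command; a sketch with LIMITS_FILE as a demo variable (defaults to a scratch file; on each node, target /etc/security/limits.d/hadoop.conf with sudo):

```shell
# Write the nproc/nofile limits file in one shot.
# LIMITS_FILE defaults to a scratch file; on a node use
# /etc/security/limits.d/hadoop.conf (via sudo).
LIMITS_FILE="${LIMITS_FILE:-hadoop.conf.demo}"
cat > "$LIMITS_FILE" <<'EOF'
* soft nproc  65535
* hard nproc  65535
* soft nofile 65535
* hard nofile 65535
EOF
```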
2. Build a local yum repository
These steps use [10.10.101.251], my lab server; replace it with your own.
2.1 Packages to download
- http://public-repo-1.hortonworks.com/ambari/centos7/2.x/updates/2.6.2.2/ambari-2.6.2.2-centos7.tar.gz
- http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.6.5.0/HDP-2.6.5.0-centos7-rpm.tar.gz
- http://public-repo-1.hortonworks.com/HDP-GPL/centos7/2.x/updates/2.6.5.0/HDP-GPL-2.6.5.0-centos7-gpl.tar.gz
2.2 Repo file preparation
Download them from:
- http://public-repo-1.hortonworks.com/ambari/centos7/2.x/updates/2.6.2.2/ambari.repo
- http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.6.5.0/hdp.repo
- http://public-repo-1.hortonworks.com/HDP-GPL/centos7/2.x/updates/2.6.5.0/hdp.gpl.repo
Local alternative:
The repo file ships inside each tarball. For example, for Ambari: ambari/centos7/2.6.2.2-1/ambari.repo
2.3 Install httpd
yum install httpd
systemctl start httpd
systemctl enable httpd
2.4 Build the repository
mkdir -p /var/www/html/ambari/HDP
mkdir -p /var/www/html/ambari/HDP-UTILS
cd
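The layout can be sketched as follows; WEBROOT is a demo variable (defaults to a scratch directory; on the server use /var/www/html), and the tar lines are commented out and assume the tarballs from 2.1 sit in the current directory. The hdp/ subdirectory matches the baseurl paths used in 2.5:

```shell
# Lay out the httpd web root for the local repository.
# WEBROOT defaults to a scratch dir; on the server use /var/www/html.
WEBROOT="${WEBROOT:-./www-demo}"
mkdir -p "$WEBROOT/hdp"
# Unpack the tarballs downloaded in section 2.1:
# tar -zxvf ambari-2.6.2.2-centos7.tar.gz      -C "$WEBROOT/hdp"
# tar -zxvf HDP-2.6.5.0-centos7-rpm.tar.gz     -C "$WEBROOT/hdp"
# tar -zxvf HDP-GPL-2.6.5.0-centos7-gpl.tar.gz -C "$WEBROOT/hdp"
```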
2.5 Edit the yum repo files
cd /etc/yum.repos.d
sudo vi ambari.repo

#VERSION_NUMBER=2.6.2.2-1
[ambari-2.6.2.2]
name=ambari Version - ambari-2.6.2.2
baseurl=http://10.10.101.251/hdp/ambari/centos7/2.6.2.2-1/
gpgcheck=1
gpgkey=http://10.10.101.251/hdp/ambari/centos7/2.6.2.2-1/RPM-GPG-KEY/RPM-GPG-KEY-Jenkins
enabled=1
priority=1

sudo vi hdp.repo

#VERSION_NUMBER=2.6.5.0-292
[HDP-2.6.5.0]
name=HDP Version - HDP-2.6.5.0
baseurl=http://10.10.101.251/hdp/HDP/centos7/2.6.5.0-292/
gpgcheck=1
gpgkey=http://10.10.101.251/hdp/HDP/centos7/2.6.5.0-292/RPM-GPG-KEY/RPM-GPG-KEY-Jenkins
enabled=1
priority=1

[HDP-UTILS-1.1.0.22]
name=HDP-UTILS Version - HDP-UTILS-1.1.0.22
baseurl=http://10.10.101.251/hdp/HDP-UTILS/centos7/1.1.0.22/
gpgcheck=1
gpgkey=http://10.10.101.251/hdp/HDP-UTILS/centos7/1.1.0.22/RPM-GPG-KEY/RPM-GPG-KEY-Jenkins
enabled=1
priority=1

sudo vi hdp.gpl.repo

#VERSION_NUMBER=2.6.5.0-292
[HDP-GPL-2.6.5.0]
name=HDP-GPL Version - HDP-GPL-2.6.5.0
baseurl=http://10.10.101.251/hdp/HDP-GPL/centos7/2.6.5.0-292/
gpgcheck=1
gpgkey=http://10.10.101.251/hdp/HDP-GPL/centos7/2.6.5.0-292/RPM-GPG-KEY/RPM-GPG-KEY-Jenkins
enabled=1
priority=1
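For scripted installs, the same ambari.repo can be generated instead of edited by hand; a sketch, with REPO_DIR as a demo variable (defaults to the current directory; on the node use /etc/yum.repos.d with sudo):

```shell
# Generate ambari.repo non-interactively.
# REPO_DIR defaults to the current directory; on a node use /etc/yum.repos.d.
REPO_DIR="${REPO_DIR:-.}"
cat > "$REPO_DIR/ambari.repo" <<'EOF'
#VERSION_NUMBER=2.6.2.2-1
[ambari-2.6.2.2]
name=ambari Version - ambari-2.6.2.2
baseurl=http://10.10.101.251/hdp/ambari/centos7/2.6.2.2-1/
gpgcheck=1
gpgkey=http://10.10.101.251/hdp/ambari/centos7/2.6.2.2-1/RPM-GPG-KEY/RPM-GPG-KEY-Jenkins
enabled=1
priority=1
EOF
```

The hdp.repo and hdp.gpl.repo files can be produced the same way.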
3. Database preparation
Not covered in detail here; the lab environment reuses an existing database: ambari_dianxin
- Host: 10.10.101.10
- User: ambari_dianxin
- Password: ambari_dianxin
- Connect: mysql -uambari_dianxin -pambari_dianxin
4. Install ambari-server
- Install:
sudo yum install -y ambari-server
- Set up:
sudo ambari-server setup
A full session transcript follows (it includes an initial ambari-server reset attempt, which fails against an external database):
[admin@hdp1 yum.repos.d]$ sudo ambari-server reset
Using python  /usr/bin/python
Resetting ambari-server
**** WARNING **** You are about to reset and clear the Ambari Server database. This will remove all cluster host and configuration information from the database. You will be required to re-configure the Ambari server and re-run the cluster wizard.
Are you SURE you want to perform the reset [yes/no] (no)? yes
ERROR: Exiting with exit code 1.
REASON: Ambari doesn't support resetting exernal DB automatically. To reset Ambari Server schema you must first drop and then create it using DDL scripts from "/var/lib/ambari-server/resources/"
[admin@hdp1 yum.repos.d]$ sudo ambari-server setup
Using python  /usr/bin/python
Setup ambari-server
Checking SELinux...
SELinux status is 'disabled'
Ambari-server daemon is configured to run under user 'ambari'. Change this setting [y/n] (n)?
Adjusting ambari-server permissions and ownership...
Checking firewall status...
WARNING: iptables is running. Confirm the necessary Ambari ports are accessible. Refer to the Ambari documentation for more details on ports.
OK to continue [y/n] (y)?
Checking JDK...
Do you want to change Oracle JDK [y/n] (n)? y
[1] Oracle JDK 1.8 + Java Cryptography Extension (JCE) Policy Files 8
[2] Oracle JDK 1.7 + Java Cryptography Extension (JCE) Policy Files 7
[3] Custom JDK
==============================================================================
Enter choice (1): 3
WARNING: JDK must be installed on all hosts and JAVA_HOME must be valid on all hosts.
WARNING: JCE Policy files are required for configuring Kerberos security. If you plan to use Kerberos, please make sure JCE Unlimited Strength Jurisdiction Policy Files are valid on all hosts.
Path to JAVA_HOME: /usr/local/jdk
Validating JDK on Ambari Server...done.
Checking GPL software agreement...
Completing setup...
Configuring database...
Enter advanced database configuration [y/n] (n)? y
Configuring database...
==============================================================================
Choose one of the following options:
[1] - PostgreSQL (Embedded)
[2] - Oracle
[3] - MySQL / MariaDB
[4] - PostgreSQL
[5] - Microsoft SQL Server (Tech Preview)
[6] - SQL Anywhere
[7] - BDB
==============================================================================
Enter choice (3):
Hostname (10.10.101.10):
Port (3309):
Database name (ambari_dianxin):
Username (ambari_dianxin):
Enter Database Password (ambari_dianxin):
Configuring ambari database...
Configuring remote database connection properties...
WARNING: Before starting Ambari Server, you must run the following DDL against the database to create the schema: /var/lib/ambari-server/resources/Ambari-DDL-MySQL-CREATE.sql
Proceed with configuring remote database connection properties [y/n] (y)? y
Extracting system views...
...........
Adjusting ambari-server permissions and ownership...
Ambari Server 'setup' completed successfully.
- Run the DDL: the SQL file and the database are on different nodes, so copy it over first:
scp /var/lib/ambari-server/resources/Ambari-DDL-MySQL-CREATE.sql admin@10.10.101.10:/home/admin/
- Run Ambari's table-creation script (inside a mysql session on the database node):
source /home/admin/Ambari-DDL-MySQL-CREATE.sql
- Start the service:
ambari-server start
- On success you will see: Ambari Server 'start' completed successfully
- Verify in the web UI: http://10.10.1.1:8080, account: admin, password: admin
- On failure, check the log: /var/log/ambari-server/ambari-server.log
5. Build the cluster
Log in to the Ambari web UI: http://10.10.1.1:8080
Enter the repository addresses (the IP is my lab server; replace it):
http://10.10.101.251/hdp/HDP/centos7/2.6.5.0-292/
http://10.10.101.251/hdp/HDP-UTILS/centos7/1.1.0.22/
View the private key:
cat ~/.ssh/id_rsa
and paste it into the text box.
If this step fails with the following error, resolve it as described below.
Error:
ERROR 2018-05-30 00:12:25,280 NetUtil.py:96 - EOF occurred in violation of protocol (_ssl.c:579)
ERROR 2018-05-30 00:12:25,280 NetUtil.py:97 - SSLError: Failed to connect. Please check openssl library versions.
Fix:
1. Edit the /etc/python/cert-verification.cfg configuration file:
# vim /etc/python/cert-verification.cfg
[https]
verify=disable
2. Edit /etc/ambari-agent/conf/ambari-agent.ini; in the [security] section, make sure the following two values are set, leaving everything else unchanged:
[root@ambari ~]# vi /etc/ambari-agent/conf/ambari-agent.ini
[security]
ssl_verify_cert=0
force_https_protocol=PROTOCOL_TLSv1_2
Save and exit, then restart ambari-agent:
[root@ambari ~]# ambari-agent restart
If the agent still cannot register, use the approach below.
If it still cannot register after that, the JDK version is wrong; use the default Oracle JDK (Ambari 2.6.2.2 requires java version "1.8.0_112").
If an agent cannot reconnect after a node reboot (the node loses its heartbeat in Ambari):
1. systemctl stop ambari-agent
2. On the node that lost its heartbeat, edit /etc/ambari-agent/conf/ambari-agent.ini and add under [security]:
force_https_protocol=PROTOCOL_TLSv1_2
3. Disable certificate verification in /etc/python/cert-verification.cfg:
[https]
verify=disable
4. systemctl start ambari-agent
5.1 Install HDP
Select the services; keep every setting at its default except the passwords and the database configuration.
If the connection test fails, register the MySQL driver with Ambari:
ambari-server setup --jdbc-db=mysql --jdbc-driver=/usr/share/java/mysql-connector-java.jar
5.2 Set up Oozie
Then just wait for the installation to complete.
6. Uninstall Ambari
Stop the services:
sudo ambari-agent stop
sudo ambari-server stop
Remove all components:
sudo yum remove -y hadoop_2* hdp-select* ranger_2* zookeeper* bigtop* atlas-metadata* ambari* spark* slider* storm* hive* oozie_2*
Add any that are missing.
Check whether ambari was actually removed, and erase any leftover package:
rpm -qa | grep ambari
rpm -e ambari-server-2.6.2.2-1.x86_64
Remove the directories:
sudo rm -rf /var/lib/ambari* /usr/lib/ambari-* /etc/ambari-* \
    /usr/lib/python2.6/site-packages/ambari_* /usr/lib/python2.6/site-packages/resource_management
sudo rm -rf /etc/hadoop /etc/hbase /etc/hive /etc/hive2 /etc/hive-hcatalog /etc/hive-webhcat \
    /etc/oozie /etc/sqoop /etc/zookeeper /etc/flume /etc/storm /etc/storm-slider-client \
    /etc/tez /etc/tez_hive2 /etc/spark /etc/spark2 /etc/phoenix /etc/pig /etc/falcon \
    /etc/knox /etc/kafka /etc/slider
sudo rm -rf /var/run/spark /var/run/hadoop /var/run/hadoop-yarn /var/run/hadoop-mapreduce \
    /var/run/hadoop-hdfs /var/run/hbase /var/run/zookeeper /var/run/flume /var/run/storm \
    /var/run/webhcat /var/run/kafka /var/run/hive /var/run/hive-hcatalog /var/run/oozie \
    /var/run/sqoop /var/run/falcon /var/run/ambari-metrics-collector /var/run/ambari-metrics-monitor
sudo rm -rf /var/log/hadoop /var/log/hadoop-hdfs /var/log/hadoop-yarn /var/log/hadoop-mapreduce \
    /var/log/hbase /var/log/hive /var/log/hive-hcatalog /var/log/zookeeper /var/log/flume \
    /var/log/storm /var/log/sqoop /var/log/oozie /var/log/falcon /var/log/webhcat /var/log/spark \
    /var/log/knox /var/log/ambari-server /var/log/ambari-agent /var/log/ambari-metrics-monitor
sudo rm -rf /var/lib/hive /var/lib/oozie /var/lib/flume /var/lib/hadoop-yarn \
    /var/lib/hadoop-mapreduce /var/lib/hadoop-hdfs /var/lib/zookeeper /var/lib/knox \
    /var/lib/slider /var/lib/pgsql/
sudo rm -rf /usr/lib/flume /usr/lib/storm
sudo rm -rf /var/tmp/oozie /tmp/ambari-qa /tmp/hive /tmp/hadoop /tmp/hadoop-hdfs \
    /var/hadoop /hadoop/falcon /hadoop /usr/hdp /usr/hadoop /opt/hadoop
sudo rm -rf /usr/bin/worker-lanucher /usr/bin/zookeeper-client /usr/bin/zookeeper-server \
    /usr/bin/zookeeper-server-cleanup /usr/bin/yarn /usr/bin/storm /usr/bin/storm-slider \
    /usr/bin/sqoop /usr/bin/sqoop-codegen /usr/bin/sqoop-create-hive-table /usr/bin/sqoop-eval \
    /usr/bin/sqoop-export /usr/bin/sqoop-help /usr/bin/sqoop-import /usr/bin/sqoop-import-all-tables \
    /usr/bin/sqoop-job /usr/bin/sqoop-list-databases /usr/bin/sqoop-list-tables /usr/bin/sqoop-merge \
    /usr/bin/sqoop-metastore /usr/bin/sqoop-version /usr/bin/slider /usr/bin/ranger-admin-start \
    /usr/bin/ranger-admin-stop /usr/bin/ranger-kms /usr/bin/ranger-usersync-start \
    /usr/bin/ranger-usersync-stop /usr/bin/pig /usr/bin/phoenix-psql /usr/bin/phoenix-queryserver \
    /usr/bin/phoenix-sqlline /usr/bin/phoenix-sqlline-thin /usr/bin/oozie /usr/bin/oozied.sh \
    /usr/bin/mapred /usr/bin/mahout /usr/bin/kafka /usr/bin/hive /usr/bin/hiveserver2 \
    /usr/bin/hbase /usr/bin/hcat /usr/bin/hdfs /usr/bin/hadoop /usr/bin/flume-ng /usr/bin/falcon \
    /usr/bin/beeline /usr/bin/atlas-start /usr/bin/atlas-stop /usr/bin/accumulo
Add any that are missing.
If a single component errors out during installation, just delete everything belonging to that component and reinstall it. Example error: symlink target /usr/hdp/current/oozie-client for oozie already exists and it is not a symlink.
Remove the packages:
sudo yum remove oozie_2* -y
Find the oozie rpm package:
rpm -qa | grep oozie
Remove the package:
sudo rpm -e oozie_2_6_5_0_292-4.2.0.2.6.5.0-292.noarch
If this fails with: /var/tmp/rpm-tmp.YhTbCT: line 1: pushd: /usr/hdp/2.6.5.0-292/oozie: No such file or directory
create the directory first:
sudo mkdir -p /usr/hdp/2.6.5.0-292/oozie
then remove it again:
sudo rpm -e oozie_2_6_5_0_292-4.2.0.2.6.5.0-292.noarch
Remove the directories:
sudo rm -rf /usr/hdp/current/oozie-client/
sudo rm -rf /usr/hdp/2.6.5.0-292/oozie
sudo rm -rf /etc/oozie/
sudo rm -rf /var/lib/oozie/
sudo rm -rf /var/log/oozie/
sudo userdel oozie
sudo rm -rf /home/oozie
THE END