1. Offline installation of Ansible
Link: https://pan.baidu.com/s/1dKlwtLWSOKoMkanW900n9Q
Extraction code: ansi
For the installation procedure, see my other blog post, "Fully offline RPM installation of ansible 2.9.18 on CentOS 7 x86_64"; installing as described there is all that is needed. In this walkthrough, Ansible is installed on 192.168.0.16.
First, a note on the setup: this tutorial builds the Hadoop cluster on three virtual machines, with the following specifications:
| Server | Specifications |
| --- | --- |
| 192.168.0.16 | 4 GB RAM, 4 CPUs, 100 GB disk |
| 192.168.0.17 | 4 GB RAM, 4 CPUs, 100 GB disk |
| 192.168.0.18 | 4 GB RAM, 4 CPUs, 100 GB disk |
The deployment plan: 192.168.0.16 is the master node, while 192.168.0.17 and 192.168.0.18 are the slave nodes. The master node runs Ambari together with HBase, HDFS, ZooKeeper, and Kafka; the slave nodes run HBase, HDFS, ZooKeeper, and Kafka.
2. Pre-installation notes and downloading the required packages
(1) Turn off the firewall and SELinux on all three servers

Run the following on all three servers:

```bash
systemctl disable firewalld && systemctl stop firewalld
```

Then edit /etc/selinux/config and change the line `SELINUX=enforcing` to `SELINUX=disabled`.
If you use Ansible instead, the commands look like this (run on 192.168.0.16):
```bash
[root@master html]# ansible all -m shell -a 'systemctl disable firewalld'
192.168.0.18 | CHANGED | rc=0 >>
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
192.168.0.17 | CHANGED | rc=0 >>
192.168.0.16 | CHANGED | rc=0 >>
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@master html]# ansible all -m shell -a 'systemctl stop firewalld'
192.168.0.17 | CHANGED | rc=0 >>
192.168.0.18 | CHANGED | rc=0 >>
192.168.0.16 | CHANGED | rc=0 >>
```
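The SELinux change can be pushed out the same way. A minimal sketch using Ansible's shell module (the sed one-liner is my own, not from the original post):

```bash
# Flip SELINUX=enforcing to SELINUX=disabled on every inventory host,
# then print the resulting line as a sanity check.
ansible all -m shell -a "sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config && grep '^SELINUX=' /etc/selinux/config"
```

The edit takes effect after a reboot; running `setenforce 0` on each host disables enforcement immediately in the meantime.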
(2) Download links for Ambari and HDP

Ambari and HDP
Extraction code: hado

Upload the downloaded files to the /root directory of the master server 192.168.0.16, to be used later.
3. Passwordless SSH for the cluster, Ansible configuration, and hostname setup
(1) Ansible configuration

Edit /etc/ansible/hosts with vim and append the following at the end of the file:

```bash
[hadoopcluster]
192.168.0.16
192.168.0.17
192.168.0.18
```
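Once the passwordless SSH from the next step is in place, the new group can be verified with Ansible's built-in connectivity test:

```bash
# Every host in the group should answer SUCCESS with "ping": "pong".
ansible hadoopcluster -m ping
```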
(2) Passwordless SSH

This part is a little interesting. The goal is that any one of the three servers can log in to either of the others without a password. The approach is therefore to generate a key pair on each server and then merge the contents of the three authorized_keys files into one. Concretely:

On 192.168.0.16, first generate the key pair, then ssh-copy-id it to the machine itself, i.e. run these two commands:

```bash
ssh-keygen -t rsa
ssh-copy-id 192.168.0.16
```

At this point, ~/.ssh/authorized_keys contains only this one server's key. Repeat the same steps on the other two servers, then merge the contents of the three authorized_keys files (copy-pasting via Xshell is convenient). The final ~/.ssh/authorized_keys should look like the following (what matters is that it contains entries for root@master, root@slave1, and root@slave2):
```bash
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDkAAuVE1YSwR6aBTpfaOK5PsG3p+FBs4w0/L6pe7lPu/yGAR7TtJgYA/5u9Vf8wIdVtrnZlDXq7bUkgR4U6tWJJDXxxE3kd4MUOT1XNpJwzPjdlIEkB0iDwZPMJhDWuEVrP/ITMkurz1RgUnhfRdFcJa/fWCRiKgNiqT6/OA9iqjCA/I1Yr/iiPVKufmEn31IL7vzsXGDtDD87wXgVySBC1H5xSfO0QG/OIasBiRjg/1ugYH0jKEL69n9i8jK/A8IEki0Y4K2GqeFsYvsVHpKkdz0juNQbQDa7NXYlcCdIccpfvMxlpp+SePWZzZTILdLtCH9hmalJ8jIna+dw75BR root@master
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCzgFsUC4wlM8L9lKit2o6vuJU3i4qC53OfY2Tx9T+jTCx9R0Qf7chtLyIB49d29fVKYvu4/e4nD7thrPLFosh05fuhjb+NmaIq1XLcz14Qta6DcZMJqdhlOXjg4bKZ1QXQ/0GRgBZ0jcaIHQpQVFRFaD/WWZ1o4/d7tpPn6OxAKtL+WDXZbBhCaUUG8M9ESlF6ukGGIqUoUNFS1ejSLzxNMNcpp8TJ5l8w6i5XMPthGq64muMbnM3TiO0qNse9a2vTLncY6Jg5VrQbv7JOqUHwVcLu75xaGqD15Z6HOb5P8cIkm90Km0wZA9OVli9Gb+DzSMdLljj7BhcnByybGAV3 root@slave1
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCrYMcgQBPP8iyk7DuJqX8rdYE6GNiwZQWdztzhcVqYj15ZqVtEN7IUc7IlDKWF5X730aqCP7+7ag1vOxpP8+pryW74y0uMVOfLfkTiFUfseQdsFrCBfuPVooloee5uxVls+0cmoNwTehylkqCyKhoW/xUsD3iVuumSQ8jRyBsktOXjsND4scxkxA1gAl+h9xGEr+sWOe2tdzUR2tWQHHG91BfM9FzHGmKK24hjKg1Ugp2Qlw2E5S/d/vKM+1ReSIqvG1hquiY/vBSC95GW2r9PVCYEB8L7I2kKY5o7KdLMI6faHzR1PwThpFdGdA0mkiz4tixEEYSDxkJwsv4+S/dt root@slave2
```
scp the merged authorized_keys from 192.168.0.16 over the file on the other two servers to enable passwordless login. Each server should then ssh to every other server once, so that the host keys are written into ~/.ssh/known_hosts.
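As a sketch, the distribution and known_hosts seeding can be scripted from 192.168.0.16 (the first scp to each host will still prompt for a password; the ssh-keyscan step stands in for logging in to each host by hand):

```bash
# Overwrite the other nodes' authorized_keys with the merged file.
for host in 192.168.0.17 192.168.0.18; do
  scp ~/.ssh/authorized_keys "${host}:~/.ssh/authorized_keys"
done
# Pre-seed host keys so the first passwordless login does not prompt.
ssh-keyscan 192.168.0.16 192.168.0.17 192.168.0.18 >> ~/.ssh/known_hosts
```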
SSH logins should now look like this (every pair of servers can log in to each other password-free):
```bash
[root@master ~]# ssh master
Last login: Sat Jun 26 05:45:47 2021 from 192.168.0.111
[root@master ~]# logout
Connection to master closed.
[root@master ~]# ssh slave1
Last login: Sat Jun 26 05:45:54 2021 from 192.168.0.16
[root@master ~]# logout
Connection to slave1 closed.
[root@master ~]# ssh slave2
Last login: Sat Jun 26 05:46:03 2021 from 192.168.0.16
[root@master ~]# logout
Connection to slave2 closed.
```

(3) Pinning the hostnames

On 192.168.0.16, edit /etc/hosts so that it reads:

```bash
192.168.0.16 master myhadoop.com
192.168.0.17 slave1
192.168.0.18 slave2
```
scp this file to the other two servers. I have also bound a LAN-internal domain name, myhadoop.com, to 192.168.0.16 here; it will come in handy later during the installation.
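Instead of two scp commands, the hosts file can also be fanned out with Ansible's copy module (a sketch; `hadoopcluster` is the inventory group defined earlier, and copying onto the master itself is a harmless no-op):

```bash
# Push the master's /etc/hosts to every node in the group.
ansible hadoopcluster -m copy -a 'src=/etc/hosts dest=/etc/hosts'
```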
4. Building the local offline Hadoop repository

Whether you use the official repositories or first build a local repo by mounting the OS installation ISO, install httpd and the various dependencies that are needed. It is enough that the following command runs successfully:

```bash
yum install gcc gcc-c++ openssl openssl-devel zlib-devel bzip2-devel httpd -y
```
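For machines with no internet access at all, the ISO-backed local repo mentioned above can be set up roughly like this (a sketch; the ISO path and mount point are placeholders of mine, not from the original post):

```bash
# Mount the CentOS 7 installation ISO and expose it as a yum repo.
mkdir -p /mnt/cdrom
mount -o loop /root/CentOS-7-x86_64-DVD.iso /mnt/cdrom
cat > /etc/yum.repos.d/local-iso.repo <<'EOF'
[local-iso]
name=Local CentOS 7 ISO
baseurl=file:///mnt/cdrom
enabled=1
gpgcheck=0
EOF
```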
Once httpd is installed, enable and start it:

```bash
systemctl enable httpd && systemctl start httpd
```
Assuming the files downloaded in step 2 are in the /root directory, install the two RPM packages and extract the four tarballs into /var/www/html:

```bash
cd ambari/
rpm -ivh libtirpc-0.2.4-0.16.el7.src.rpm --force
rpm -ivh libtirpc-devel-0.2.4-0.16.el7.x86_64.rpm --nodeps --force
tar zxf ambari-2.7.0.0-centos7.tar.gz -C /var/www/html/
tar zxf HDP-3.0.0.0-centos7-rpm.tar.gz -C /var/www/html/
tar zxf HDP-UTILS-1.1.0.22-centos7.tar.gz -C /var/www/html/
tar zxf HDP-GPL-3.0.0.0-centos7-ppc-gpl.tar.gz -C /var/www/html/
```
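At this point the extracted trees should be visible both on disk and over HTTP; a quick check (the expected directory names follow from the tarball layouts above, so treat the listing as approximate):

```bash
# The repo trees should appear under the web root...
ls /var/www/html/
# Expect something like: ambari  HDP  HDP-GPL  HDP-UTILS
# ...and be reachable through httpd.
curl -s http://192.168.0.16/ambari/centos7/ | head -n 5
```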
Create three repo files with the following contents:

```bash
[root@master ~]# cat /etc/yum.repos.d/ambari.repo
[ambari]
name=ambari
baseurl=http://192.168.0.16/ambari/centos7/2.7.0.0-897/
gpgcheck=1
gpgkey=http://192.168.0.16/ambari/centos7/2.7.0.0-897/RPM-GPG-KEY/RPM-GPG-KEY-Jenkins
enabled=1
priority=1
[root@master ~]# cat /etc/yum.repos.d/hdp.gpl.repo
#VERSION_NUMBER=3.0.0.0-1634
[HDP-GPL-3.0.0.0]
name=HDP-GPL Version - HDP-GPL-3.0.0.0
baseurl=http://192.168.0.16/HDP-GPL/centos7-ppc/3.0.0.0-1634/
gpgcheck=1
gpgkey=http://192.168.0.16/HDP-GPL/centos7-ppc/3.0.0.0-1634/RPM-GPG-KEY/RPM-GPG-KEY-Jenkins
enabled=1
priority=1
[root@master ~]# cat /etc/yum.repos.d/hdp.repo
#VERSION_NUMBER=3.0.0.0-1634
[HDP-3.0.0.0]
name=HDP Version - HDP-3.0.0.0
baseurl=http://192.168.0.16/HDP/centos7/3.0.0.0-1634/
gpgcheck=1
gpgkey=http://192.168.0.16/HDP/centos7/3.0.0.0-1634/RPM-GPG-KEY/RPM-GPG-KEY-Jenkins
enabled=1
priority=1

[HDP-UTILS-1.1.0.22]
name=HDP-UTILS Version - HDP-UTILS-1.1.0.22
baseurl=http://192.168.0.16/HDP-UTILS/centos7/1.1.0.22
gpgcheck=1
gpgkey=http://192.168.0.16/HDP-UTILS/centos7/1.1.0.22/RPM-GPG-KEY/RPM-GPG-KEY-Jenkins
enabled=1
priority=1
```
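Before installing anything from these repos, it is worth rebuilding the yum cache and confirming all four repo IDs show up (standard yum commands, not from the original post):

```bash
# Rebuild metadata from the new local repos, then list them.
yum clean all && yum makecache
yum repolist | grep -Ei 'ambari|hdp'
```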
Copy the three repo files to the slave nodes:

```bash
[root@master yum.repos.d]# scp ambari.repo slave1:/etc/yum.repos.d/
ambari.repo                                   100%  326   225.3KB/s   00:00
[root@master yum.repos.d]# scp hdp.gpl.repo slave1:/etc/yum.repos.d/
hdp.gpl.repo                                  100%  272   529.9KB/s   00:00
[root@master yum.repos.d]# scp hdp.repo slave1:/etc/yum.repos.d/
hdp.repo                                      100%  484   366.7KB/s   00:00
[root@master yum.repos.d]# scp ambari.repo slave2:/etc/yum.repos.d/
ambari.repo                                   100%  326   192.0KB/s   00:00
[root@master yum.repos.d]# scp hdp.gpl.repo slave2:/etc/yum.repos.d/
hdp.gpl.repo                                  100%  272   365.1KB/s   00:00
[root@master yum.repos.d]# scp hdp.repo slave2:/etc/yum.repos.d/
hdp.repo                                      100%  484   680.2KB/s   00:00
```
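The same distribution can be done in one shot with Ansible (a sketch; this also copies onto the master itself, which is harmless):

```bash
# Push each repo file to every node in the hadoopcluster group.
for f in ambari.repo hdp.gpl.repo hdp.repo; do
  ansible hadoopcluster -m copy -a "src=/etc/yum.repos.d/$f dest=/etc/yum.repos.d/$f"
done
```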
5. Setting up the time server

1. On 192.168.0.16, edit /etc/ntp.conf and make sure the file contains the following two lines:

```bash
server 127.127.1.0 prefer
fudge 127.127.1.0 stratum 10
```

Then enable and start the service:

```bash
systemctl enable ntpd && systemctl start ntpd
```
2. On 192.168.0.17 and 192.168.0.18, edit /etc/ntp.conf and make sure the file contains this line:

```bash
server 192.168.0.16
```

Then run `ntpdate 192.168.0.16` once to set the clock (do this before starting ntpd, since ntpdate cannot run while ntpd holds UDP port 123), and afterwards enable and start ntpd just as on the master. The ntpdate output should be:

```bash
[root@slave2 ~]# ntpdate 192.168.0.16
26 Jun 19:42:55 ntpdate[2970]: adjust time server 192.168.0.16 offset -0.000098 sec
```
After roughly 5 to 10 minutes, running the following commands on 192.168.0.17 and 192.168.0.18 should produce output like this:

```bash
[root@slave2 ~]# ntpstat
synchronised to NTP server (192.168.0.16) at stratum 12
   time correct to within 20 ms
   polling server every 64 s
[root@slave2 ~]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 master          LOCAL(0)        11 u   54   64  377    0.684    0.237   0.184
```
6. Installing the Ambari server and Ambari agents
- On 192.168.0.16, run `yum install ambari-server -y`; on 192.168.0.17 and 192.168.0.18, run `yum install ambari-agent -y`.
- On 192.168.0.16, initialize ambari-server with the command `ambari-server setup`. The detailed interactive input is as follows:
```bash
[root@master yum.repos.d]# ambari-server setup
Using python /usr/bin/python
Setup ambari-server
Checking SELinux...
SELinux status is 'disabled'
Customize user account for ambari-server daemon [y/n] (n)? y
Enter user account for ambari-server daemon (root):
Adjusting ambari-server permissions and ownership...
Checking firewall status...
Checking JDK...
[1] Oracle JDK 1.8 + Java Cryptography Extension (JCE) Policy Files 8
[2] Custom JDK
==============================================================================
Enter choice (1): 2
WARNING: JDK must be installed on all hosts and JAVA_HOME must be valid on all hosts.
WARNING: JCE Policy files are required for configuring Kerberos security. If you plan to use Kerberos, please make sure JCE Unlimited Strength Jurisdiction Policy Files are valid on all hosts.
Path to JAVA_HOME: /usr/local/jdk1.8.0_20/
Validating JDK on Ambari Server...done.
Check JDK version for Ambari Server...
JDK version found: 8
Minimum JDK version is 8 for Ambari. Skipping to setup different JDK for Ambari Server.
Checking GPL software agreement...
GPL License for LZO: https://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html
Enable Ambari Server to download and install GPL Licensed LZO packages [y/n] (n)? y
Completing setup...
Configuring database...
Enter advanced database configuration [y/n] (n)?
Configuring database...
Default properties detected. Using built-in database.
Configuring ambari database...
Checking PostgreSQL...
Running initdb: This may take up to a minute.
Initializing database ... OK

About to start PostgreSQL
Configuring local database...
Configuring PostgreSQL...
Restarting PostgreSQL
Creating schema and user...
done.
Creating tables...
done.
Extracting system views...
ambari-admin-2.7.0.0.897.jar
....
Ambari repo file contains latest json url http://public-repo-1.hortonworks.com/HDP/hdp_urlinfo.json, updating stacks repoinfos with it...
Adjusting ambari-server permissions and ownership...
Ambari Server 'setup' completed successfully.
[root@master yum.repos.d]# ambari-server start
Using python /usr/bin/python
Starting ambari-server
Ambari Server running with administrator privileges.
Organizing resource files at /var/lib/ambari-server/resources...
Ambari database consistency check started...
Server PID at: /var/run/ambari-server/ambari-server.pid
Server out at: /var/log/ambari-server/ambari-server.out
Server log at: /var/log/ambari-server/ambari-server.log
Waiting for server start...............................................
Server started listening on 8080

DB configs consistency check: no errors and warnings were found.
Ambari Server 'start' completed successfully.
```
- On 192.168.0.17 and 192.168.0.18, start the ambari-agent with the command `ambari-agent restart` (a sketch for pointing the agents at the server follows the output below):
```bash
[root@slave2 yum.repos.d]# ambari-agent restart
Restarting ambari-agent
Verifying Python version compatibility...
Using python /usr/bin/python
Found ambari-agent PID: 5817
Stopping ambari-agent
Removing PID file at /run/ambari-agent/ambari-agent.pid
ambari-agent successfully stopped
Verifying Python version compatibility...
Using python /usr/bin/python
Checking for previously running Ambari Agent...
Checking ambari-common dir...
Starting ambari-agent
Verifying ambari-agent process status...
Ambari Agent successfully started
Agent PID at: /run/ambari-agent/ambari-agent.pid
Agent out at: /var/log/ambari-agent/ambari-agent.out
Agent log at: /var/log/ambari-agent/ambari-agent.log
```
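One wiring detail worth checking before the restart: an agent only registers if it knows where the server is. A minimal sketch (the sed edit and curl probe are my own; /etc/ambari-agent/conf/ambari-agent.ini and its `hostname` key are the standard agent settings, and admin/admin is Ambari's well-known default login rather than something specific to this post):

```bash
# On each slave: point the agent at the Ambari server, then restart it.
sed -i 's/^hostname=.*/hostname=master/' /etc/ambari-agent/conf/ambari-agent.ini
ambari-agent restart

# From any machine: confirm the server answers on port 8080.
curl -s -o /dev/null -w '%{http_code}\n' http://192.168.0.16:8080/
# Then browse to http://myhadoop.com:8080, log in (default admin/admin),
# and continue the cluster install from the web wizard.
```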