Table of Contents
- 1. Basic virtual machine configuration
1) Network configuration
2) Hostname configuration
3) IP-to-hostname mapping
4) Disable the firewall
5) Time synchronization
6) Install JDK 1.8
- 2. Hadoop setup
1) Unpack and configure environment variables
2) Follow the single-node setup chapter of the official docs
3) Configuration
4) Edit the core configuration file to specify where the NameNode runs
5) Specify the replication factor and storage paths
6) Configure which nodes start DataNodes
7) Format and start
8) Web UI access
- 3. Basic usage
1. Basic virtual machine configuration
1) Network configuration
Edit the file: vi /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
#HWADDR=00:0C:29:42:15:C2
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static
IPADDR=<your static IP>
NETMASK=255.255.255.0
GATEWAY=<your gateway>
DNS1=223.5.5.5
DNS2=114.114.114.114
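After editing, restart the network service so the static address takes effect. A minimal check, assuming the CentOS service scripts used elsewhere in this guide:
#apply the new network configuration
service network restart
#confirm eth0 picked up the address
ip addr show eth0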
2) Hostname configuration
Edit the file: vi /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=node01
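Note that /etc/sysconfig/network is the CentOS 6 convention. Since the firewall steps below use systemctl (CentOS 7), the hostname can also be set immediately with hostnamectl; this is an alternative, not a step from the original:
#CentOS 7: set the hostname without editing files or rebooting
hostnamectl set-hostname node01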
3) IP-to-hostname mapping
vi /etc/hosts
<machine IP> node01
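A quick way to confirm the mapping resolves:
#should reach the IP configured above
ping -c 3 node01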
4) Disable the firewall
#check the firewall status
systemctl status firewalld.service
#stop the firewall
systemctl stop firewalld.service
#keep it off across reboots
systemctl disable firewalld.service
#disable selinux
vi /etc/selinux/config
SELINUX=disabled
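SELINUX=disabled only takes effect after a reboot. To drop enforcement for the current session as well (a common companion step, not in the original):
#switch to permissive mode until the next reboot
setenforce 0
#verify: should print Permissive (or Disabled after a reboot)
getenforce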
5) Time synchronization
#install the time-sync service
yum install ntp -y
#edit the configuration to point at an upstream server
vi /etc/ntp.conf
server ntp1.aliyun.com
#start the sync service
service ntpd start
#enable it at boot
chkconfig ntpd on
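Once ntpd has run for a minute or two, confirm it is actually synchronizing:
#list the upstream servers ntpd is polling; ntp1.aliyun.com should appear
ntpq -p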
6) Install JDK 1.8
Remove any JDK that is already installed:
rpm -qa|grep java
rpm -e --nodeps xxx
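For example, if the query above lists a preinstalled OpenJDK, remove it by the exact package name printed (the name below is illustrative only):
#substitute the package names rpm -qa actually printed
rpm -e --nodeps java-1.7.0-openjdk-headless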
Download the installer package, jdk-8u181-linux-x64.rpm, and use the rz command (from the lrzsz package) to upload the local file to the server.
The rpm installs directly into place; no manual unpacking is needed. Install it with rpm -i:
rpm -i jdk-8u181-linux-x64.rpm
Configure the environment variables:
vi /etc/profile
export JAVA_HOME=/usr/java/default
export PATH=$PATH:$JAVA_HOME/bin
Reload the profile so the changes take effect:
source /etc/profile
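Verify the JDK is active:
#should report java version "1.8.0_181"
java -version
echo $JAVA_HOME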
2. Hadoop setup
1) Unpack and configure environment variables
Unpack the tarball into /opt/bigdata so the path matches HADOOP_HOME below (creating the directory first):
mkdir -p /opt/bigdata
tar xf hadoop-2.6.5.tar.gz -C /opt/bigdata/
Add the environment variables to /etc/profile:
export JAVA_HOME=/usr/java/default
export HADOOP_HOME=/opt/bigdata/hadoop-2.6.5
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
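After another source /etc/profile, confirm Hadoop resolves on the PATH:
#should print Hadoop 2.6.5 and its build details
hadoop version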
2) Follow the single-node setup chapter of the official docs
Hadoop: Setting up a Single Node Cluster.
Configure the Hadoop roles:
In the distribution, edit the file etc/hadoop/hadoop-env.sh to define some parameters as follows:
# set to the root of your Java installation
export JAVA_HOME=/usr/java/default
3) Configuration
![configuration screenshot](https://ucc.alicdn.com/images/user-upload-01/50b94cc31fbe4167a8ddd22c5a8aaba7.png)
4) Edit the core configuration file to specify where the NameNode runs
vi etc/hadoop/core-site.xml:
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://node01:9000</value>
</property>
</configuration>
5) Specify the replication factor and storage paths (the path properties are sketched after the snippet below)
vi etc/hadoop/hdfs-site.xml:
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
</configuration>
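The heading mentions paths, but the snippet above only sets the replication factor. To keep HDFS metadata and block data out of the default /tmp location, the standard Hadoop 2.x properties can be added inside the same <configuration> block; the /var/bigdata paths here are illustrative, not from the original:
<property>
<name>dfs.namenode.name.dir</name>
<value>/var/bigdata/hadoop/dfs/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>/var/bigdata/hadoop/dfs/data</value>
</property>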
6) Configure which nodes start DataNodes
vi etc/hadoop/slaves
node01
7) Format and start
Format the filesystem:
$ bin/hdfs namenode -format
Start NameNode daemon and DataNode daemon:
$ sbin/start-dfs.sh
Start it:
start-dfs.sh
Starting namenodes on [node01]
node01: starting namenode, logging to /opt/bigdata/hadoop-2.6.5/logs/hadoop-root-namenode-node01.out
node01: starting datanode, logging to /opt/bigdata/hadoop-2.6.5/logs/hadoop-root-datanode-node01.out
Starting secondary namenodes [node01]
node01: starting secondarynamenode, logging to /opt/bigdata/hadoop-2.6.5/logs/hadoop-root-secondarynamenode-node01.out
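With the daemons up, jps gives a quick sanity check:
#expect NameNode, DataNode, and SecondaryNameNode alongside Jps
jps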
8) Web UI access
http://node01:50070/explorer.html#/
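Port 50070 is the default NameNode web UI port in Hadoop 2.x. The same cluster summary is available from the shell:
#prints capacity and the list of live DataNodes
hdfs dfsadmin -report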
3. Basic usage
[root@node01 hadoop]# hdfs dfs -mkdir /bigdata
[root@node01 hadoop]# hdfs dfs -mkdir -p /data/local
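A few more everyday commands round out the smoke test; test.txt is just a stand-in for any local file:
[root@node01 hadoop]# hdfs dfs -ls /
[root@node01 hadoop]# hdfs dfs -put test.txt /bigdata
[root@node01 hadoop]# hdfs dfs -cat /bigdata/test.txt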