Setting up HDFS
Add the hostname
I only added the master hostname here:
[root@10 /xinghl/hadoop/bin]$ cat /etc/hosts
127.0.0.1   localhost
::1         localhost
10.0.67.101 master
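A quick sanity check that the new name resolves (plain OS tooling, nothing Hadoop-specific):
getent hosts master   # should print: 10.0.67.101  master
ping -c 1 master      # should reach 10.0.67.101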
If you are setting up a remote cluster, you also need to configure SSH.
Since this is just a single-node test, I skipped that step for now (see the SSH section at the end of this post).
Extract Hadoop into the /usr directory.
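A minimal sketch of that step, assuming a Hadoop 2.x tarball (the version in the file name is an assumption; use whichever release you downloaded):
tar -zxf hadoop-2.7.3.tar.gz -C /usr/
mv /usr/hadoop-2.7.3 /usr/hadoop   # so the paths below can use /usr/hadoop
export HADOOP_HOME=/usr/hadoop
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin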
Create the directories Hadoop needs. The config files below point at /usr/hadoop/dfs/name, /usr/hadoop/dfs/data and /usr/hadoop/tmp, so create them under the install directory:
mkdir -p /usr/hadoop/dfs/name
mkdir -p /usr/hadoop/dfs/data
mkdir -p /usr/hadoop/tmp
Edit the configuration files under $HADOOP_HOME/etc/hadoop.
Edit hadoop-env.sh and point JAVA_HOME at your JDK install:
export JAVA_HOME=/usr/java
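If you are not sure where the JDK actually lives, this usually reveals it (a generic Linux trick, not a Hadoop command):
readlink -f "$(which java)"   # strip the trailing /bin/java (or /jre/bin/java) to get JAVA_HOME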
Edit slaves. This file lists the DataNode hosts, one per line; for this single-node setup it is simply:
localhost
Edit core-site.xml:
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://master:8020</value>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>131072</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/usr/hadoop/tmp</value>
        <description>A base for other temporary directories.</description>
    </property>
    <property>
        <name>hadoop.proxyuser.u0.hosts</name>
        <value>*</value>
    </property>
    <property>
        <name>hadoop.proxyuser.u0.groups</name>
        <value>*</value>
    </property>
</configuration>
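To sanity-check that core-site.xml is being picked up, hdfs getconf can echo a key back (available in Hadoop 2.x):
$HADOOP_HOME/bin/hdfs getconf -confKey fs.defaultFS   # should print hdfs://master:8020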
Edit hdfs-site.xml:
<configuration>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>master:9001</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/usr/hadoop/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/usr/hadoop/dfs/data</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
</configuration>
Note: a dfs.replication of 3 is a cluster default; with only one DataNode the extra replicas can never be placed, so 1 is the honest value for a single-node test.
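Since dfs.webhdfs.enabled is true above, the NameNode also exposes a REST API once HDFS is running. A hedged example, assuming the Hadoop 2.x default NameNode HTTP port of 50070:
curl -s "http://master:50070/webhdfs/v1/?op=LISTSTATUS"   # lists the HDFS root directory as JSON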
Start Hadoop:
hadoop namenode -format   # in $HADOOP_HOME/bin; on Hadoop 2.x the preferred form is: hdfs namenode -format
start-all.sh              # in $HADOOP_HOME/sbin; deprecated in favor of start-dfs.sh plus start-yarn.sh
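Once the daemons are up, a minimal smoke test against HDFS (the /user/root path is just an illustrative choice):
hdfs dfs -mkdir -p /user/root                       # create a home directory in HDFS
echo hello | hdfs dfs -put - /user/root/hello.txt   # "-" makes put read from stdin
hdfs dfs -cat /user/root/hello.txt                  # should print back: hello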
Check that everything is running:
1. Use the jps command to see which daemons are up.
2. Open http://10.0.67.101:8088/cluster (this is the YARN ResourceManager UI; the HDFS NameNode web UI is on port 50070).
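On a healthy single-node run, jps should report roughly the following daemons (the PIDs here are illustrative and will differ):
# jps
2881 NameNode
3012 DataNode
3176 SecondaryNameNode
3330 ResourceManager
3431 NodeManager
3742 Jps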
Configure SSH for passwordless login
cd ~/.ssh/
ssh-keygen -t rsa                          # accept the defaults; generates id_rsa and id_rsa.pub
cat id_rsa.pub >> ~/.ssh/authorized_keys   # authorize our own public key
ssh localhost                              # first login confirms the host key and adds it to known_hosts
exit
ssh localhost                              # second login should now succeed without a password
exit
This post is reposted from xingoo's blog on Cnblogs (博客园); the original is titled 《单节点部署Hadoop教程》 (single-node Hadoop deployment tutorial). Please contact the original author before reprinting.