Before configuring HBase, you first need to set up ZooKeeper.
Add the following entries to the /etc/hosts file on every server:
192.168.111.200 master
192.168.111.199 slave1
192.168.111.198 slave2
If these entries were already added when the cluster was first set up, you can skip this step.
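An optional quick check that the hostnames resolve from every node:
ping -c 1 master
ping -c 1 slave1
ping -c 1 slave2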
Unpack the ZooKeeper archive on any one of the nodes.
The ZooKeeper release can be downloaded from the official website.
Once downloaded, upload the archive to any one of the Linux machines and extract it:
tar -zxvf zookeeper-3.4.8.tar.gz
After extracting, configure /etc/profile (I extracted the archive under /opt/SoftWare):
#zookeeper
export ZOOKEEPER=/opt/SoftWare/zookeeper/zookeeper-3.4.8
export PATH=$PATH:$ZOOKEEPER/bin
Then run source /etc/profile to apply the changes.
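A quick optional check that the variables took effect:
echo $ZOOKEEPER
which zkServer.sh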
Go into the conf directory of the freshly extracted folder.
Create zoo.cfg from the provided sample:
cp zoo_sample.cfg zoo.cfg
In zoo.cfg, set dataDir=/opt/SoftWare/zookeeper/zookeeper-3.4.8/data
and add the server list (the number after server. must match each node's myid, created below):
server.1=master:2888:3888
server.2=slave1:2888:3888
server.3=slave2:2888:3888
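Putting it together, the relevant part of zoo.cfg ends up looking roughly like this (tickTime, initLimit, syncLimit and clientPort are simply the zoo_sample.cfg defaults):
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/opt/SoftWare/zookeeper/zookeeper-3.4.8/data
clientPort=2181
server.1=master:2888:3888
server.2=slave1:2888:3888
server.3=slave2:2888:3888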
Once configured, copy everything above to the other nodes:
scp -r /opt/SoftWare/zookeeper root@slave1:/opt/SoftWare/
scp -r /opt/SoftWare/zookeeper root@slave2:/opt/SoftWare/
On all three machines, create a data folder under zookeeper-3.4.8/.
Then create a file named myid inside that data folder
and write the matching number into it: 1 on master,
2 on slave1,
3 on slave2 (the number must match the server.N entry in zoo.cfg).
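For example, on master this looks like the following (change the echoed value to 2 on slave1 and 3 on slave2):
mkdir -p /opt/SoftWare/zookeeper/zookeeper-3.4.8/data
echo 1 > /opt/SoftWare/zookeeper/zookeeper-3.4.8/data/myid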
Then copy /etc/profile to the other nodes the same way, keeping its contents identical on every node, and source it again on each one.
At this point ZooKeeper is configured.
Start ZooKeeper on every node: zkServer.sh start
Check the ZooKeeper service status: zkServer.sh status
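With all three nodes started, the status output on each node looks roughly like this (paths will differ); one node reports Mode: leader and the other two Mode: follower:
ZooKeeper JMX enabled by default
Using config: /opt/SoftWare/zookeeper/zookeeper-3.4.8/bin/../conf/zoo.cfg
Mode: follower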
Check with jps;
you should see a QuorumPeerMain process.
With ZooKeeper in place, we can install HBase.
HBase installation and configuration
Download the HBase release.
Extract the HBase archive on any one node, e.g. on 192.168.111.200:
tar -zxvf hbase-1.2.0-bin.tar.gz
Add the environment variables to /etc/profile:
#hbase
export HBASE_HOME=/opt/SoftWare/Hbase/hbase-1.2.0
export PATH=$PATH:$HBASE_HOME/bin
source /etc/profile
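An optional quick check that the PATH entry works, which should print the HBase version and build info:
hbase version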
Go into HBase's conf directory and modify three files:
hbase-env.sh
hbase-site.xml
regionservers
In hbase-env.sh, around the first dozen or so lines (where the JAVA_HOME sample is), add:
# The java implementation to use. Java 1.7+ required.
# export JAVA_HOME=/usr/java/jdk1.6.0/
export JAVA_HOME=/usr/java/jdk1.7.0   (adjust this to wherever your JDK is actually installed; JAVA_HOME must point at the JDK, not at the HBase directory)
# Extra Java CLASSPATH elements. Optional.
# export HBASE_CLASSPATH=
Then, further down, set:
# Seconds to sleep between slave commands. Unset by default. This
# can be useful in large clusters, where, e.g., slave rsyncs can
# otherwise arrive faster than the master can service them.
# export HBASE_SLAVE_SLEEP=0.1
# Tell HBase whether it should manage it's own instance of Zookeeper or not.
export HBASE_MANAGES_ZK=false
The two lines that matter are:
**
export JAVA_HOME=...   (your JDK install path)
export HBASE_MANAGES_ZK=false   (tells HBase to use the external ZooKeeper we just configured instead of managing its own)
**
In hbase-site.xml:
<configuration>
<property>
<name>hbase.zookeeper.quorum</name>
<value>master,slave1,slave2</value>
<description>Comma separated list of servers in the ZooKeeper quorum.</description>
</property>
<property>
<name>hbase.zookeeper.property.dataDir</name>
<value>/opt/SoftWare/Hbase/hbase-1.2.0/zookeeperdata</value>
<description>Property from ZooKeeper config zoo.cfg.
The directory where the snapshot is stored.
</description>
</property>
<property>
<name>hbase.tmp.dir</name>
<value>/opt/SoftWare/Hbase/hbase-1.2.0/tmpdata</value>
</property>
<property>
<name>hbase.rootdir</name>
<value>hdfs://master:9000/hbase</value>
<description>The directory shared by RegionServers.</description>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
<description>The mode the cluster will be in. Possible values are
false: standalone and pseudo-distributed setups with managed Zookeeper
true: fully-distributed with unmanaged Zookeeper Quorum (see hbase-env.sh)
</description>
</property>
</configuration>
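Note that hbase.rootdir must match the fs.defaultFS (or fs.default.name) value in Hadoop's core-site.xml, including the port. A quick way to double-check, assuming Hadoop 2.x installed under $HADOOP_HOME:
grep -A 1 'fs.default' $HADOOP_HOME/etc/hadoop/core-site.xml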
In regionservers, add the IP or hostname of each region server node:
master
slave1
slave2
(Change these values to match your own cluster's hostnames; it should be easy to follow.)
After saving, copy the entire HBase folder to each of the other servers:
scp -r /opt/SoftWare/Hbase root@slave1:/opt/SoftWare/
scp -r /opt/SoftWare/Hbase root@slave2:/opt/SoftWare/
Start the HBase service on the Hadoop NameNode node:
start-hbase.sh
After starting, run jps. On the master node you should see:
HRegionServer
HMaster
On the slave nodes:
HRegionServer
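To confirm the cluster actually serves requests, a quick smoke test from the HBase shell (the table name t1 and column family cf here are just placeholders):
hbase shell
# inside the shell:
status
create 't1', 'cf'
list
exit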
Startup order:
Hadoop HDFS -> Hadoop YARN -> ZooKeeper -> HBase
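A minimal start-up sequence following that order might look like this (assuming the Hadoop sbin scripts and the ZooKeeper/HBase bin directories are on PATH, as configured above):
# on master (the NameNode):
start-dfs.sh
start-yarn.sh
# on each of master, slave1, slave2:
zkServer.sh start
# back on master, once HDFS and ZooKeeper are up:
start-hbase.sh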