[HBase] HBase Installation

Copyright notice: This is the author's original article; it may not be reproduced without the author's permission. https://blog.csdn.net/SunnyYoona/article/details/53456433
1. Start Hadoop

If Hadoop is not installed yet, see this earlier post: http://blog.csdn.net/sunnyyoona/article/details/53454430

Start Hadoop and check the Hadoop version:

    xiaosi@yoona:~/opt/hadoop-2.7.3$ sbin/start-dfs.sh
    Starting namenodes on [localhost]
    localhost: starting namenode, logging to /home/xiaosi/opt/hadoop-2.7.3/logs/hadoop-xiaosi-namenode-yoona.out
    localhost: starting datanode, logging to /home/xiaosi/opt/hadoop-2.7.3/logs/hadoop-xiaosi-datanode-yoona.out
    Starting secondary namenodes [0.0.0.0]
    0.0.0.0: starting secondarynamenode, logging to /home/xiaosi/opt/hadoop-2.7.3/logs/hadoop-xiaosi-secondarynamenode-yoona.out
    xiaosi@yoona:~/opt/hadoop-2.7.3$ bin/hadoop version
    Hadoop 2.7.3
    Subversion https://git-wip-us.apache.org/repos/asf/hadoop.git -r baa91f7c6bc9cb92be5982de4719c1c8af91ccff
    Compiled by root on 2016-08-18T01:41Z
    Compiled with protoc 2.5.0
    From source with checksum 2e4ce5f957ea4db193bce3734ff29ff4
    This command was run using /home/xiaosi/opt/hadoop-2.7.3/share/hadoop/common/hadoop-common-2.7.3.jar

The output above shows that the Hadoop version is 2.7.3.
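
Before moving on, it helps to confirm that the HDFS daemons actually came up. A minimal check, assuming the same $HADOOP_HOME layout as above:

    xiaosi@yoona:~/opt/hadoop-2.7.3$ jps                          # should list NameNode, DataNode and SecondaryNameNode
    xiaosi@yoona:~/opt/hadoop-2.7.3$ bin/hdfs dfsadmin -report    # prints the live DataNodes and their capacity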

2. Download and Extract

Download HBase from the official download page: http://www.apache.org/dyn/closer.cgi/hbase/
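
For example, the 1.2.4 binary tarball can be fetched from the command line; the archive URL below is illustrative, so check the download page for the exact mirror link:

    xiaosi@yoona:~$ wget https://archive.apache.org/dist/hbase/1.2.4/hbase-1.2.4-bin.tar.gz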

Extract it into the ~/opt directory:

    xiaosi@yoona:~$ tar -zxvf hbase-1.2.4-bin.tar.gz -C opt/

If needed, rename the extracted directory so it is easier to manage:

    xiaosi@yoona:~$ mv opt/hbase-1.2.4-bin opt/hbase-1.2.4    # only needed if the archive did not already extract to a directory named hbase-1.2.4
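
A quick sanity check of the resulting layout (the directories named in the comment are what an HBase binary distribution normally contains):

    xiaosi@yoona:~$ ls opt/hbase-1.2.4    # expect bin/, conf/, lib/ and docs/ among others
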
3. Configuration

Edit hbase-env.sh to set the JDK path and to have HBase manage its own ZooKeeper instance:

    # The java implementation to use.  Java 1.7+ required.
    export JAVA_HOME=/home/xiaosi/opt/jdk-1.8.0
    # Tell HBase whether it should manage it's own instance of Zookeeper or not.
    export HBASE_MANAGES_ZK=true
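
If you are not sure what JAVA_HOME should be on your machine, one common way to locate the JDK on Linux is shown below; the path in the comment is only an example, and installations differ:

    xiaosi@yoona:~$ readlink -f $(which java)    # e.g. /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java
    # JAVA_HOME should be the JDK root, i.e. the path above without the trailing /jre/bin/java part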

Edit hbase-site.xml:

    <?xml version="1.0"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
    <!--
    /**
     *
     * Licensed to the Apache Software Foundation (ASF) under one
     * or more contributor license agreements.  See the NOTICE file
     * distributed with this work for additional information
     * regarding copyright ownership.  The ASF licenses this file
     * to you under the Apache License, Version 2.0 (the
     * "License"); you may not use this file except in compliance
     * with the License.  You may obtain a copy of the License at
     *
     *     http://www.apache.org/licenses/LICENSE-2.0
     *
     * Unless required by applicable law or agreed to in writing, software
     * distributed under the License is distributed on an "AS IS" BASIS,
     * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
     * See the License for the specific language governing permissions and
     * limitations under the License.
     */
    -->
    <configuration>
      <property>
        <name>system.username</name>
        <value>xiaosi</value>
      </property>
      <property>
        <name>hbase.rootdir</name>
        <value>hdfs://localhost:9000/hbase</value>
      </property>
      <property>
        <name>hbase.cluster.distributed</name>
        <value>true</value>
      </property>
      <property>
        <name>hbase.tmp.dir</name>
        <value>/home/${system.username}/tmp/hbase</value>
      </property>
    </configuration>
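
Note that hbase.rootdir must point at the same NameNode address and port that Hadoop's core-site.xml declares in fs.defaultFS, or HBase will not be able to reach HDFS. With the configuration above, the matching core-site.xml entry would look roughly like this (a sketch, assuming the pseudo-distributed Hadoop setup from the earlier post):

    <property>
      <name>fs.defaultFS</name>
      <value>hdfs://localhost:9000</value>
    </property>
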
4. Set Environment Variables

In /etc/profile, add an HBASE_HOME environment variable pointing at the HBase directory to make later operations easier:

    # hbase
    export HBASE_HOME=/home/xiaosi/opt/hbase-1.2.4
    export PATH=${HBASE_HOME}/bin:$PATH
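
For the new variables to take effect in the current shell, re-read the profile and verify that the hbase command resolves; a quick check, assuming the paths above:

    xiaosi@yoona:~$ source /etc/profile
    xiaosi@yoona:~$ echo $HBASE_HOME    # expect /home/xiaosi/opt/hbase-1.2.4
    xiaosi@yoona:~$ hbase version       # should report HBase 1.2.4
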
5. Start Hadoop

Enter the Hadoop home directory and start the NameNode and DataNode daemons (skip this step if HDFS is still running from step 1):

    xiaosi@yoona:~/opt/hadoop-2.7.3$ cd ~
    xiaosi@yoona:~$ cd $HADOOP_HOME
    xiaosi@yoona:~/opt/hadoop-2.7.3$ sbin/start-dfs.sh
    Starting namenodes on [localhost]
    localhost: starting namenode, logging to /home/xiaosi/opt/hadoop-2.7.3/logs/hadoop-xiaosi-namenode-yoona.out
    localhost: starting datanode, logging to /home/xiaosi/opt/hadoop-2.7.3/logs/hadoop-xiaosi-datanode-yoona.out
    Starting secondary namenodes [0.0.0.0]
    0.0.0.0: starting secondarynamenode, logging to /home/xiaosi/opt/hadoop-2.7.3/logs/hadoop-xiaosi-secondarynamenode-yoona.out
6. Start HBase

Enter the HBase home directory and start the HMaster and HRegionServer daemons:

    xiaosi@yoona:~/opt/hbase-1.2.4$ bin/start-hbase.sh
    localhost: starting zookeeper, logging to /home/xiaosi/opt/hbase-1.2.4/bin/../logs/hbase-xiaosi-zookeeper-yoona.out
    starting master, logging to /home/xiaosi/opt/hbase-1.2.4/logs/hbase-xiaosi-master-yoona.out
    Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=128m; support was removed in 8.0
    Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0
    starting regionserver, logging to /home/xiaosi/opt/hbase-1.2.4/logs/hbase-xiaosi-1-regionserver-yoona.out

Use the jps command to check what is running:

    xiaosi@yoona:~/opt/hbase-1.2.4$ jps
    1536 Jps
    915 HQuorumPeer
    22886 SecondaryNameNode
    22678 DataNode
    1117 HRegionServer
    989 HMaster
    22511 NameNode
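
Another way to confirm the master is healthy is its built-in web UI; assuming the default hbase.master.info.port of 16010 for HBase 1.x, a quick check on the same machine would be:

    xiaosi@yoona:~$ curl -sL http://localhost:16010 | head -n 5    # any HTML coming back means the master web UI is serving
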
7. Enter the HBase Shell

    xiaosi@yoona:~/opt/hbase-1.2.4$ bin/hbase shell
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:file:/home/xiaosi/opt/hbase-1.2.4/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/xiaosi/opt/hadoop-2.7.3/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
    HBase Shell; enter 'help<RETURN>' for list of supported commands.
    Type "exit<RETURN>" to leave the HBase Shell
    Version 1.2.4, r67592f3d062743907f8c5ae00dbbe1ae4f69e5af, Tue Oct 25 18:10:20 CDT 2016
    hbase(main):001:0>

At this point HBase should be installed successfully. As a final check, try the command that lists all tables in HBase. This exercises a full round trip from the client application to the HBase server and back. At the shell prompt, type list and press Enter:

    hbase(main):001:0> list
    TABLE
    0 row(s) in 0.1980 seconds
    => []
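
The shell's status command is another quick health check; typed at the same prompt, it prints a one-line summary of the cluster (the exact output varies, so none is reproduced here):

    status    # e.g. reports the number of active masters, backup masters and region servers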

Installation and verification are done; we can now create a table and store some data.

8. Basic Use of HBase

Create a table named student with a single column family, info:

    hbase(main):002:0> create 'student', 'info'
    0 row(s) in 1.3580 seconds
    => Hbase::Table - student
List the tables again:

    hbase(main):003:0> list
    TABLE
    student
    1 row(s) in 0.0150 seconds
    => ["student"]
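
To round out the basics, here is a short sketch of writing and reading a row in the student table; the row key 1001 and the info:name / info:age columns are made-up examples rather than anything the table requires:

    put 'student', '1001', 'info:name', 'xiaosi'    # write one cell: row 1001, column family info, qualifier name
    put 'student', '1001', 'info:age', '24'         # add another column to the same row
    get 'student', '1001'                           # read the whole row back
    scan 'student'                                  # scan every row in the table
    disable 'student'                               # a table must be disabled before it can be dropped
    drop 'student'                                  # remove the table entirely (optional clean-up)

When you are finished, bin/stop-hbase.sh shuts the HBase daemons down cleanly.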





