The previous post gave a brief introduction to HBase:
https://yq.aliyun.com/articles/376750?spm=a2c4e.11155435.0.0.a0c0cf2TT57c8
Today, let's work through an example using the HBase shell.
1. Create a table named blog with two column families, 'article' and 'author'
2. Insert the data below into the blog table
3. Read the author's name and age for rowkey "blog2"
4. Read the title of every article
5. Update the age of the author of "blog1" to 40
6. Read the author's name and age for rowkey "blog1"
7. Delete the article tag for rowkey "blog3"
8. Read the title and tag of every article, along with the name of every author
The commands are as follows:
1. Create the table with the two column families 'article' and 'author'
>create 'blog','article','author'
Check the table structure:
>describe 'blog'
Table blog is ENABLED
blog
COLUMN FAMILIES DESCRIPTION
{NAME => 'article', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
{NAME => 'author', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
2 row(s) in 0.0430 seconds
2. Insert the data. First row:
>put 'blog','blog1','article:title','mapreduce'
>put 'blog','blog1','article:content','introduce mapreduce'
>put 'blog','blog1','article:tag','computing'
>put 'blog','blog1','author:name','David'
>put 'blog','blog1','author:gender','male'
>put 'blog','blog1','author:age','34'
Second row:
>put 'blog','blog2','article:title','hadoop'
>put 'blog','blog2','article:content','hadoop in action'
>put 'blog','blog2','article:tag','system'
>put 'blog','blog2','author:name','jim'
>put 'blog','blog2','author:gender','male'
>put 'blog','blog2','author:age','35'
Third row:
>put 'blog','blog3','article:title','hdfs'
>put 'blog','blog3','article:content','principle'
>put 'blog','blog3','article:tag','storage'
>put 'blog','blog3','author:name','jack'
>put 'blog','blog3','author:gender','male'
>put 'blog','blog3','author:age','21'
3. Read the author's name and age for rowkey "blog2":
>get 'blog','blog2','author:name','author:age'
COLUMN CELL
author:age timestamp=1516592344548, value=35
author:name timestamp=1516592314918, value=jim
2 row(s) in 0.0110 seconds
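Logically, HBase stores each row as a sorted map from column (family:qualifier) to value, which is why a get can pull individual cells out of one row like this. As a rough mental model, here is a plain-Python sketch of the table and the lookup above (the nested dicts and the `get` helper are illustrative only, not an HBase client; timestamps and versions are omitted):

```python
# Simplified sketch of HBase's logical data model:
# rowkey -> {'family:qualifier' -> value}. Timestamps/versions omitted.
blog = {
    "blog1": {"article:title": "mapreduce", "article:content": "introduce mapreduce",
              "article:tag": "computing", "author:name": "David",
              "author:gender": "male", "author:age": "34"},
    "blog2": {"article:title": "hadoop", "article:content": "hadoop in action",
              "article:tag": "system", "author:name": "jim",
              "author:gender": "male", "author:age": "35"},
    "blog3": {"article:title": "hdfs", "article:content": "principle",
              "article:tag": "storage", "author:name": "jack",
              "author:gender": "male", "author:age": "21"},
}

def get(table, rowkey, *columns):
    """Mimic `get 'blog','<rowkey>','col1','col2'`: fetch selected cells of one row."""
    row = table[rowkey]
    return {c: row[c] for c in columns if c in row}

print(get(blog, "blog2", "author:name", "author:age"))
# -> {'author:name': 'jim', 'author:age': '35'}
```

The shell reports "2 row(s)" for this get because it counts the cells returned, not the rows.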
4. Read the title of every article:
>scan 'blog',{COLUMNS=>'article:title'}
ROW COLUMN+CELL
blog1 column=article:title, timestamp=1516591624453, value=mapreduce
blog2 column=article:title, timestamp=1516592170858, value=hadoop
blog3 column=article:title, timestamp=1516592680840, value=hdfs
3 row(s) in 0.0400 seconds
5. Update the age of the author of "blog1" to 40:
>put 'blog','blog1','author:age','40'
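Under the hood, each cell can hold several timestamped versions, and the column family's VERSIONS setting (1 in the describe output above) caps how many are retained, so this put simply shadows the old age. A minimal hand-rolled sketch of that behavior (the timestamps are made up and the helpers are hypothetical, not HBase code):

```python
# Sketch of HBase cell versioning: a cell keeps values keyed by timestamp,
# and VERSIONS (here 1, as in the describe output) caps how many survive.
MAX_VERSIONS = 1  # VERSIONS => '1' on the 'author' family

def put(cell, ts, value):
    cell[ts] = value
    # Keep only the newest MAX_VERSIONS entries, much as HBase trims
    # excess versions during flush/compaction.
    for old_ts in sorted(cell)[:-MAX_VERSIONS]:
        del cell[old_ts]

def get_latest(cell):
    """A read returns the value with the highest timestamp."""
    return cell[max(cell)]

age_cell = {}
put(age_cell, 1516592031628, "34")  # original age of blog1's author
put(age_cell, 1516593000000, "40")  # the update above, with a newer timestamp
print(get_latest(age_cell))         # -> 40
```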
6. Read the author's name and age for rowkey "blog1" to confirm the update:
>get 'blog','blog1','author:name','author:age'
7. Delete the article tag for rowkey "blog3":
>delete 'blog','blog3','article:tag'
8. Read the title and tag of every article, along with the name of every author:
>scan 'blog',{COLUMNS=>['article:title','article:tag','author:name']}
ROW COLUMN+CELL
blog1 column=article:tag, timestamp=1516591824979, value=computing
blog1 column=article:title, timestamp=1516591624453, value=mapreduce
blog1 column=author:name, timestamp=1516592031628, value=David
blog2 column=article:tag, timestamp=1516592220615, value=system
blog2 column=article:title, timestamp=1516592170858, value=hadoop
blog2 column=author:name, timestamp=1516592314918, value=jim
blog3 column=article:title, timestamp=1516592680840, value=hdfs
blog3 column=author:name, timestamp=1516592703643, value=jack
3 row(s) in 0.0230 seconds
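This final scan can be mimicked in the same spirit: rows come back in rowkey order, only the requested columns are kept per row, and a cell that was deleted (blog3's article:tag from step 7) simply does not appear. A self-contained plain-Python sketch of that behavior (the `scan` helper is illustrative, not the HBase API):

```python
# Rows as they stand after step 5 (age updated to 40) and step 7
# (blog3's article:tag deleted).
blog = {
    "blog1": {"article:title": "mapreduce", "article:content": "introduce mapreduce",
              "article:tag": "computing", "author:name": "David",
              "author:gender": "male", "author:age": "40"},
    "blog2": {"article:title": "hadoop", "article:content": "hadoop in action",
              "article:tag": "system", "author:name": "jim",
              "author:gender": "male", "author:age": "35"},
    "blog3": {"article:title": "hdfs", "article:content": "principle",
              "author:name": "jack", "author:gender": "male", "author:age": "21"},
}

def scan(table, columns):
    """Mimic `scan 'blog', {COLUMNS => [...]}`: walk rows in rowkey order,
    emitting only the requested cells; missing cells are skipped."""
    for rowkey in sorted(table):
        for col in sorted(columns):
            if col in table[rowkey]:
                yield rowkey, col, table[rowkey][col]

for row, col, val in scan(blog, ["article:title", "article:tag", "author:name"]):
    print(row, col, val)  # blog3 yields no article:tag line, matching the shell output
```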
If anything above is incorrect, corrections and advice from more experienced readers are very welcome. --五维空间