Simple and Practical HBase Monitoring Scripts


These are simple, easy-to-use scripts we have used in the past to health-check HBase and HDFS and to alert when remaining HDFS capacity runs low.

1. Script for Hadoop 2:

#!/bin/bash


bin=`dirname "$0"`
bin=`cd "$bin"; pwd`


# Nagios-style plugin exit codes
STATE_OK=0
STATE_WARNING=1
STATE_CRITICAL=2
STATE_UNKNOWN=3
STATE_DEPENDENT=4


source /etc/profile


DFS_REMAINING_WARNING=15     # warn when remaining DFS space drops below 15%
DFS_REMAINING_CRITICAL=5     # go critical below 5%
ABNORMAL_QUERY="INCONSISTENT|CORRUPT|FAILED|Exception"     # patterns that mark an unhealthy report


HADOOP_WEB_INTERFACE=h001.hadoop     # NameNode web UI host
HBASE_WEB_INTERFACE=h008.hadoop      # HBase master web UI host
# hbck and fsck report
output=/var/log/cluster-status
hbase hbck > $output                 # overwrite, so stale entries from earlier runs cannot retrigger alarms
hadoop fsck /apps/hbase >> $output


# check report
count=`egrep -c "$ABNORMAL_QUERY" $output`
if [ $count -eq 0 ]; then
    echo "[OK] Cluster is healthy." >> $output
else
    echo "[ABNORMAL] Cluster is abnormal!" >> $output

    # Get the last matching entry in the report file
    last_entry=`egrep "$ABNORMAL_QUERY" $output | tail -1`
    echo "($count) $last_entry"

    exit $STATE_CRITICAL
fi



# HDFS usage: the NameNode JMX NameNodeInfo bean reports PercentRemaining as a float percentage
dfs_remaining=`curl -s http://${HADOOP_WEB_INTERFACE}:50070/jmx?qry=Hadoop:service=NameNode,name=NameNodeInfo | egrep -o "PercentRemaining.*" | egrep -o "[0-9]*\.[0-9]*"`
dfs_remaining_word="DFS Remaining%: ${dfs_remaining}%"


echo "$dfs_remaining_word" >> $output


# check HDFS usage
dfs_remaining=`echo $dfs_remaining | awk -F '.' '{print $1}'`     # keep the integer part for the -lt comparisons


if [ $dfs_remaining -lt $DFS_REMAINING_CRITICAL ]; then
    echo "Low DFS space. $dfs_remaining_word"
    exit_status=$STATE_CRITICAL
elif [ $dfs_remaining -lt $DFS_REMAINING_WARNING ]; then
    echo "Low DFS space. $dfs_remaining_word"
    exit_status=$STATE_WARNING
else
    echo "HBase check OK - DFS and HBase healthy. $dfs_remaining_word"
    exit_status=$STATE_OK
fi
exit $exit_status
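
The exit codes follow the Nagios plugin convention, so the script can be registered directly as a Nagios check command or simply run from cron. A minimal crontab sketch (the install path /opt/scripts/check_hbase.sh and the mail recipient are illustrative, not from the original script):

# run the health check every 10 minutes; mail the output on any non-OK (non-zero) exit
*/10 * * * * /opt/scripts/check_hbase.sh || echo "HBase/HDFS check failed on $(hostname)" | mail -s "hbase-check alert" ops@example.com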







2. Script for Hadoop 1:


#!/bin/bash


bin=`dirname "$0"`
bin=`cd "$bin"; pwd`


# Nagios-style plugin exit codes
STATE_OK=0
STATE_WARNING=1
STATE_CRITICAL=2
STATE_UNKNOWN=3
STATE_DEPENDENT=4

source /etc/profile

DFS_REMAINING_WARNING=15     # warn when remaining DFS space drops below 15%
DFS_REMAINING_CRITICAL=5     # go critical below 5%
ABNORMAL_QUERY="INCONSISTENT|CORRUPT|FAILED|Exception"     # patterns that mark an unhealthy report

HADOOP_WEB_INTERFACE=namenode_host     # replace with the externally reachable host/IP of the Hadoop NameNode

# hbck and fsck report
output=/data/logs/cluster-status
$HBASE_HOME/bin/hbase hbck > $output                 # overwrite, so stale entries from earlier runs cannot retrigger alarms
$HADOOP_HOME/bin/hadoop fsck /hbase >> $output


# check report
count=`egrep -c "$ABNORMAL_QUERY" $output`
if [ $count -eq 0 ]; then
    echo "[OK] Cluster is healthy." >> $output
else
    echo "[ABNORMAL] Cluster is abnormal!" >> $output

    # Get the last matching entry in the report file
    last_entry=`egrep "$ABNORMAL_QUERY" $output | tail -1`
    echo "($count) $last_entry"

    exit $STATE_CRITICAL
fi


# Check RegionServer status by scraping the dead-server list from the HBase master UI
# (port 60010, assumed here to be co-located with the NameNode host)
dead_region_servers=`curl -s http://${HADOOP_WEB_INTERFACE}:60010/master-status | grep "Dead Region Servers" -A 500 | grep "Regions in Transition" -B 500 | egrep -o 'target="_blank">.*</a>' | awk -F">" '{print $2}' | awk -F"<" '{print $1}'`
if [ -z "$dead_region_servers" ]; then
    echo "[OK] All RegionServers are healthy."
    echo "[OK] All RegionServers are healthy." >> $output
else
    echo "[ABNORMAL] the dead regionserver list:" >> $output
    echo $dead_region_servers >> $output
    exit $STATE_CRITICAL
fi


# HDFS usage: on Hadoop 1 the percentage is scraped from the dfshealth.jsp status page
dfs_remaining=`curl -s http://${HADOOP_WEB_INTERFACE}:50070/dfshealth.jsp | egrep -o "DFS Remaining%.*%" | egrep -o "[0-9]*\.[0-9]*"`
dfs_remaining_word="DFS Remaining%: ${dfs_remaining}%"


echo "$dfs_remaining_word" >> $output


# check HDFS usage
dfs_remaining=`echo $dfs_remaining | awk -F '.' '{print $1}'`     # keep the integer part for the -lt comparisons


if [ $dfs_remaining -lt $DFS_REMAINING_CRITICAL ]; then
    echo "Low DFS space. $dfs_remaining_word"
    exit_status=$STATE_CRITICAL
elif [ $dfs_remaining -lt $DFS_REMAINING_WARNING ]; then
    echo "Low DFS space. $dfs_remaining_word"
    exit_status=$STATE_WARNING
else
    echo "HBase check OK - DFS and HBase healthy. $dfs_remaining_word"
    exit_status=$STATE_OK
fi
exit $exit_status
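
Scraping HTML from master-status is fragile across HBase versions. On newer HBase releases the master UI moved from port 60010 to 16010, and the dead-server count is also exposed as a JMX metric, which is far more stable to parse. A hedged sketch, reusing the HBASE_WEB_INTERFACE variable from the first script; it assumes the master's JMX servlet is reachable and that your HBase version exposes numDeadRegionServers on the Hadoop:service=HBase,name=Master,sub=Server bean, so verify against your deployment:

# query the master JMX bean for the dead-RegionServer count (assumed metric name; check your HBase version)
dead=`curl -s "http://${HBASE_WEB_INTERFACE}:16010/jmx?qry=Hadoop:service=HBase,name=Master,sub=Server" | egrep -o '"numDeadRegionServers" *: *[0-9]+' | egrep -o '[0-9]+'`
if [ "${dead:-0}" -gt 0 ]; then
    echo "[ABNORMAL] master reports $dead dead RegionServer(s)"
    exit $STATE_CRITICAL
fi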