Failed allocation for a5b33ab19ce246619b7756e32215edab_108590499; org.apache.hadoop.hbase.io.hfile.bucket.BucketAllocatorException: Allocation too big size=1114202; adjust BucketCache sizes hbase.bucketcache.bucket.sizes to accomodate if size seems reasonable and you want it cached.
Is this telling me that the bucket cache as a whole is out of space, or that this size exceeds the maximum size of a single bucket?
Also, the per-file bucket cache JSON dumped on the regionserver is full of entries like the two below. What do the fields mean? I can see entries whose size is even larger than the value in the error above.
{ "count" : 4, "countData" : 0, "sizeData" : 0, "filename" : "0108ac83e9e14e538a3f65d83b07ffca", "size" : 262677 }
{ "count" : 68, "countData" : 0, "sizeData" : 0, "filename" : "13f6b5282309436ab3c2f8e7cc1b05ce", "size" : 4549742 }
It means the block exceeded the size of the largest single bucket, not that the cache as a whole is full. This block is a bit over 1 MB, so if you want blocks that large cached you need to adjust the hbase.bucketcache.bucket.sizes configuration so that at least one bucket size can hold them. The allocateBlock code below is where the exception is thrown (a worked check follows it), and the toJSON code further down shows that the size field in the per-file JSON is the sum of all cached blocks for that HFile, which is why it can be larger than any single bucket.
/**
 * Allocate a block of the requested size and return its offset. Throws
 * BucketAllocatorException if no configured bucket size is large enough for the block,
 * or CacheFullException if the matching bucket size has no free space left.
 */
public synchronized long allocateBlock(int blockSize) throws CacheFullException,
    BucketAllocatorException {
  assert blockSize > 0;
  // Round up to the smallest configured bucket size that can hold this block;
  // null means even the largest configured bucket is too small.
  BucketSizeInfo bsi = roundUpToBucketSizeInfo(blockSize);
  if (bsi == null) {
    throw new BucketAllocatorException("Allocation too big size=" + blockSize +
        "; adjust BucketCache sizes " + CacheConfig.BUCKET_CACHE_BUCKETS_KEY +
        " to accomodate if size seems reasonable and you want it cached.");
  }
  long offset = bsi.allocateBlock();
  // Ask caller to free up space and try again!
  if (offset < 0)
    throw new CacheFullException(blockSize, bsi.sizeIndex());
  usedSize += bucketSizes[bsi.sizeIndex()];
  return offset;
}
// From BlockCacheUtil (the class name is my reading of the HBase source): builds the
// per-file entries shown in the JSON above by summing every cached block for one HFile.
public static String toJSON(final String filename, final NavigableSet<CachedBlock> blocks)
    throws JsonGenerationException, JsonMappingException, IOException {
  CachedBlockCountsPerFile counts = new CachedBlockCountsPerFile(filename);
  for (CachedBlock cb: blocks) {
    counts.count++;
    counts.size += cb.getSize();        // total bytes cached for this file, all block types
    BlockType bt = cb.getBlockType();
    if (bt != null && bt.isData()) {
      counts.countData++;
      counts.sizeData += cb.getSize();  // bytes cached for DATA blocks only
    }
  }
  return MAPPER.writeValueAsString(counts);
}
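So for the second question: count and size are totals over every cached block belonging to one HFile, while countData and sizeData are the same totals restricted to DATA blocks (your entries show 0 there, meaning only index/bloom type blocks from those files are cached). Because size is a per-file sum, it can be far larger than any single bucket. A tiny sketch with made-up numbers:

// Minimal sketch, not HBase code: made-up block sizes showing how the per-file "size"
// in the JSON is a sum of many individually small blocks.
public class PerFileSizeDemo {
  public static void main(String[] args) {
    int count = 0;
    long size = 0;
    // Hypothetical: 68 cached index/bloom blocks of roughly 65KB each for one HFile.
    for (int i = 0; i < 68; i++) {
      count++;
      size += 65 * 1024 + 900; // each block easily fits in a default bucket
    }
    // Prints a total on the order of the 4549742 seen above, even though no single
    // block is anywhere near the 1MB that triggered the allocation error.
    System.out.println("count=" + count + " size=" + size);
  }
}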