Here is the background:
I have an existing Cassandra cluster (3.11.4) of 4 nodes, running on Java 1.8. I am currently expanding it and plan to add 4 new nodes.
Expansion process:
The first 2 new nodes joined without any problem, but bootstrapping the 3rd node failed. The exact error is:
java.io.IOException: net.jpountz.lz4.LZ4Exception: Error decoding offset 24121 of input buffer
    at org.apache.cassandra.io.compress.LZ4Compressor.uncompress(LZ4Compressor.java:142) ~[apache-cassandra-3.11.4.jar:3.11.4]
    at org.apache.cassandra.streaming.compress.CompressedInputStream.decompress(CompressedInputStream.java:163) ~[apache-cassandra-3.11.4.jar:3.11.4]
    at org.apache.cassandra.streaming.compress.CompressedInputStream.decompressNextChunk(CompressedInputStream.java:109) ~[apache-cassandra-3.11.4.jar:3.11.4]
    at org.apache.cassandra.streaming.compress.CompressedInputStream.read(CompressedInputStream.java:121) ~[apache-cassandra-3.11.4.jar:3.11.4]
    at java.io.FilterInputStream.read(FilterInputStream.java:83) ~[na:1.8.0_322]
    at org.apache.cassandra.io.util.TrackedInputStream.read(TrackedInputStream.java:51) ~[apache-cassandra-3.11.4.jar:3.11.4]
    at java.io.DataInputStream.readUnsignedByte(DataInputStream.java:288) ~[na:1.8.0_322]
    at org.apache.cassandra.db.rows.Cell$Serializer.deserialize(Cell.java:225) ~[apache-cassandra-3.11.4.jar:3.11.4]
    at org.apache.cassandra.db.rows.UnfilteredSerializer.readSimpleColumn(UnfilteredSerializer.java:644) ~[apache-cassandra-3.11.4.jar:3.11.4]
    at org.apache.cassandra.db.rows.UnfilteredSerializer.lambda$deserializeRowBody$1(UnfilteredSerializer.java:609) ~[apache-cassandra-3.11.4.jar:3.11.4]
    at org.apache.cassandra.utils.btree.BTree.applyForwards(BTree.java:1221) ~[apache-cassandra-3.11.4.jar:3.11.4]
    at org.apache.cassandra.utils.btree.BTree.apply(BTree.java:1176) ~[apache-cassandra-3.11.4.jar:3.11.4]
    at org.apache.cassandra.db.Columns.apply(Columns.java:396) ~[apache-cassandra-3.11.4.jar:3.11.4]
    at org.apache.cassandra.db.rows.UnfilteredSerializer.deserializeRowBody(UnfilteredSerializer.java:605) ~[apache-cassandra-3.11.4.jar:3.11.4]
    at org.apache.cassandra.db.rows.UnfilteredSerializer.deserializeOne(UnfilteredSerializer.java:480) ~[apache-cassandra-3.11.4.jar:3.11.4]
    at org.apache.cassandra.db.rows.UnfilteredSerializer.deserialize(UnfilteredSerializer.java:436) ~[apache-cassandra-3.11.4.jar:3.11.4]
    at org.apache.cassandra.io.sstable.SSTableSimpleIterator$CurrentFormatIterator.computeNext(SSTableSimpleIterator.java:95) ~[apache-cassandra-3.11.4.jar:3.11.4]
    at org.apache.cassandra.io.sstable.SSTableSimpleIterator$CurrentFormatIterator.computeNext(SSTableSimpleIterator.java:73) ~[apache-cassandra-3.11.4.jar:3.11.4]
    at org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) ~[apache-cassandra-3.11.4.jar:3.11.4]
    at org.apache.cassandra.streaming.StreamReader$StreamDeserializer.hasNext(StreamReader.java:262) ~[apache-cassandra-3.11.4.jar:3.11.4]
    at org.apache.cassandra.db.transform.BaseRows.hasNext(BaseRows.java:133) ~[apache-cassandra-3.11.4.jar:3.11.4]
    at org.apache.cassandra.db.ColumnIndex.buildRowIndex(ColumnIndex.java:110) ~[apache-cassandra-3.11.4.jar:3.11.4]
    at org.apache.cassandra.io.sstable.format.big.BigTableWriter.append(BigTableWriter.java:173) ~[apache-cassandra-3.11.4.jar:3.11.4]
    at org.apache.cassandra.io.sstable.SimpleSSTableMultiWriter.append(SimpleSSTableMultiWriter.java:48) ~[apache-cassandra-3.11.4.jar:3.11.4]
    at org.apache.cassandra.io.sstable.format.RangeAwareSSTableWriter.append(RangeAwareSSTableWriter.java:102) ~[apache-cassandra-3.11.4.jar:3.11.4]
    at org.apache.cassandra.streaming.StreamReader.writePartition(StreamReader.java:171) ~[apache-cassandra-3.11.4.jar:3.11.4]
    at org.apache.cassandra.streaming.compress.CompressedStreamReader.read(CompressedStreamReader.java:107) ~[apache-cassandra-3.11.4.jar:3.11.4]
    at org.apache.cassandra.streaming.messages.IncomingFileMessage$1.deserialize(IncomingFileMessage.java:54) ~[apache-cassandra-3.11.4.jar:3.11.4]
    at org.apache.cassandra.streaming.messages.IncomingFileMessage$1.deserialize(IncomingFileMessage.java:43) ~[apache-cassandra-3.11.4.jar:3.11.4]
    at org.apache.cassandra.streaming.messages.StreamMessage.deserialize(StreamMessage.java:61) ~[apache-cassandra-3.11.4.jar:3.11.4]
    at org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:311) ~[apache-cassandra-3.11.4.jar:3.11.4]
    at java.lang.Thread.run(Thread.java:750) [na:1.8.0_322]
Caused by: net.jpountz.lz4.LZ4Exception: Error decoding offset 24121 of input buffer
    at net.jpountz.lz4.LZ4JNIFastDecompressor.decompress(LZ4JNIFastDecompressor.java:39) ~[lz4-1.3.0.jar:na]
    at org.apache.cassandra.io.compress.LZ4Compressor.uncompress(LZ4Compressor.java:137) ~[apache-cassandra-3.11.4.jar:3.11.4]
    ... 31 common frames omitted
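For context on what the innermost exception means: it comes from the lz4-java library (lz4-1.3.0.jar) that Cassandra's LZ4Compressor delegates to, and it is raised when a compressed chunk does not decode cleanly to the expected length, i.e. the chunk received over the stream looks corrupted or truncated. Below is a minimal, self-contained Java sketch (class and variable names are mine, purely illustrative, not Cassandra code) that provokes the same exception type by corrupting a compressed buffer:

```java
import java.nio.charset.StandardCharsets;

import net.jpountz.lz4.LZ4Exception;
import net.jpountz.lz4.LZ4Factory;
import net.jpountz.lz4.LZ4FastDecompressor;

public class Lz4CorruptionDemo {
    public static void main(String[] args) {
        LZ4Factory factory = LZ4Factory.fastestInstance();

        // Build a small, compressible payload (Java 8 compatible).
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 32; i++) {
            sb.append("some chunk of sstable data ");
        }
        byte[] original = sb.toString().getBytes(StandardCharsets.UTF_8);

        byte[] compressed = factory.fastCompressor().compress(original);

        // Flip one byte in the middle of the compressed chunk to simulate a
        // corrupted chunk (bad disk block, flaky NIC, etc.).
        compressed[compressed.length / 2] ^= 0xFF;

        LZ4FastDecompressor decompressor = factory.fastDecompressor();
        try {
            decompressor.decompress(compressed, original.length);
            // Corruption is not guaranteed to be detected; in that case the
            // call returns garbage of the expected length without throwing.
            System.out.println("Corruption went undetected this time.");
        } catch (LZ4Exception e) {
            // Typically: "Error decoding offset N of input buffer" -- the same
            // message as in the streaming stack trace above.
            System.out.println("Decompression failed: " + e.getMessage());
        }
    }
}
```

The sketch only shows that the exception is the LZ4 library's integrity check failing on a bad input buffer; it does not tell you which side (sending node, network, or receiving node) produced the bad bytes.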
I'm looking for help: what is the likely cause of this problem, and how can it be resolved?