
Tablestore batch insert works in local testing but fails on a Spark cluster

I wrote a batch-insert program following the Alibaba Cloud documentation. It runs successfully in Spark local mode, but fails on the cluster with the following error:
java.lang.UnsupportedOperationException: This is supposed to be overridden by subclasses.

at com.google.protobuf.GeneratedMessage.getUnknownFields(GeneratedMessage.java:180)
at com.alicloud.openservices.tablestore.core.protocol.OtsInternalApi$Condition.getSerializedSize(OtsInternalApi.java:3485)
at com.google.protobuf.CodedOutputStream.computeMessageSizeNoTag(CodedOutputStream.java:749)
at com.google.protobuf.CodedOutputStream.computeMessageSize(CodedOutputStream.java:530)
at com.alicloud.openservices.tablestore.core.protocol.OtsInternalApi$RowInBatchWriteRowRequest.getSerializedSize(OtsInternalApi.java:23908)
at com.google.protobuf.CodedOutputStream.computeMessageSizeNoTag(CodedOutputStream.java:749)
at com.google.protobuf.CodedOutputStream.computeMessageSize(CodedOutputStream.java:530)
at com.alicloud.openservices.tablestore.core.protocol.OtsInternalApi$TableInBatchWriteRowRequest.getSerializedSize(OtsInternalApi.java:24618)
at com.google.protobuf.CodedOutputStream.computeMessageSizeNoTag(CodedOutputStream.java:749)
at com.google.protobuf.CodedOutputStream.computeMessageSize(CodedOutputStream.java:530)
at com.alicloud.openservices.tablestore.core.protocol.OtsInternalApi$BatchWriteRowRequest.getSerializedSize(OtsInternalApi.java:25237)
at com.google.protobuf.AbstractMessageLite.toByteArray(AbstractMessageLite.java:62)
at com.alicloud.openservices.tablestore.core.OperationLauncher.asyncInvokePost(OperationLauncher.java:116)
at com.alicloud.openservices.tablestore.core.BatchWriteRowLauncher.fire(BatchWriteRowLauncher.java:63)
at com.alicloud.openservices.tablestore.InternalClient.batchWriteRow(InternalClient.java:470)
at com.alicloud.openservices.tablestore.SyncClient.batchWriteRow(SyncClient.java:186)
at com.startdt.utils.PutDataToOTS.batchWriteRow(PutDataToOTS.java:52)
at com.startdt.utils.ChangeOTSDate.changeOTSDate(ChangeOTSDate.java:13)
at com.startdt.utils.DataHandle$1.call(DataHandle.java:81)
at com.startdt.utils.DataHandle$1.call(DataHandle.java:32)
at org.apache.spark.api.java.JavaRDDLike$$anonfun$foreachPartition$1.apply(JavaRDDLike.scala:219)
at org.apache.spark.api.java.JavaRDDLike$$anonfun$foreachPartition$1.apply(JavaRDDLike.scala:219)
at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$29.apply(RDD.scala:926)
at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$29.apply(RDD.scala:926)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1951)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1951)

at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
at org.apache.spark.scheduler.Task.run(Task.scala:99)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)

The failing line is:
BatchWriteRowResponse response = otsClient.batchWriteRow(batchWriteRowRequest);
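For what it's worth, an `UnsupportedOperationException` thrown from `GeneratedMessage.getUnknownFields` usually means two incompatible protobuf versions are on the classpath (the one Spark ships versus the one the Tablestore SDK was compiled against), which would explain why local mode works but the cluster does not. A small diagnostic sketch (the `JarLocator`/`jarOf` names are hypothetical helpers, not part of any SDK) can show which jar a class is actually loaded from on an executor:

```java
import java.security.CodeSource;

public class JarLocator {
    // Returns the location of the jar (or class directory) a class was
    // loaded from; JDK bootstrap classes report no code source.
    static String jarOf(Class<?> c) {
        CodeSource src = c.getProtectionDomain().getCodeSource();
        return (src == null) ? "(bootstrap classpath)" : src.getLocation().toString();
    }

    public static void main(String[] args) {
        // Inside a foreachPartition, printing
        // jarOf(com.google.protobuf.GeneratedMessage.class) would reveal
        // which protobuf jar wins on the executor's classpath.
        System.out.println(jarOf(String.class));
    }
}
```

If the executor reports a different protobuf jar than your local run, aligning the versions (or shading the SDK's protobuf) is the usual fix.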

孤狼b组 2018-08-04 13:46:15
1 Answer
  • 努力在努力

    First check that the Spark cluster itself came up successfully, then test whether the nodes can actually reach the endpoint, and verify that the parameters are configured correctly.

    2019-07-17 22:59:51
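The connectivity check suggested above can be sketched as a plain TCP probe, runnable from any executor (the host and port are placeholders for your Tablestore endpoint):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class EndpointCheck {
    // Returns true if a TCP connection to host:port succeeds within timeoutMs.
    // A false result from an executor, when the same call succeeds locally,
    // points at network/security-group configuration rather than the SDK.
    static boolean reachable(String host, int port, int timeoutMs) {
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (IOException e) {
            return false;
        }
    }
}
```

Note, however, that the stack trace above reaches deep into protobuf serialization, so the request is being built and serialized before any network I/O; a connectivity problem alone would not produce this particular exception.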