
Testing Flink 1.10 HBase SQL CREATE TABLE locally: HBaseRowInputFormat.conf is null after the bytes are deserialized

At the job graph stage (HBaseRowInputFormat.java):
this.conf = {Configuration@4841} "Configuration: core-default.xml, core-site.xml, hbase-default.xml, hbase-site.xml"
    quietmode = true
    allowNullValueProperties = false
    resources = {ArrayList@4859} size = 2
    finalParameters = {Collections$SetFromMap@4860} size = 0
    loadDefaults = true
    updatingResource = {ConcurrentHashMap@4861} size = 343
    properties = {Properties@4862} size = 343
    overlay = {Properties@4863} size = 2
    classLoader = {Launcher$AppClassLoader@4864}

At the executor stage (InstantiationUtil.java, readObjectFromConfig):
userCodeObject = {HBaseRowInputFormat@13658}
    tableName = "test_shx"
    schema = {HBaseTableSchema@13660}
    conf = null
    readHelper = null
    endReached = false
    table = null
    scan = null
    resultScanner = null
    currentRow = null
    scannedRows = 0
    runtimeContext = null
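For context, the difference between the two dumps above is consistent with how Java serialization treats fields that are not written out: org.apache.hadoop.conf.Configuration is not a plain serializable value, and if the field holding it is transient (or is otherwise skipped when the user code object is written into the job graph), it reads back as null on the executor until something re-creates it, typically in configure()/open(). Whether that is the actual cause here would need the reproducing program the reply below asks for; the following is only a minimal, self-contained Java sketch of that mechanism, with an illustrative FakeInputFormat standing in for HBaseRowInputFormat.

import java.io.*;

// Minimal sketch, NOT the real HBaseRowInputFormat: it only illustrates that a
// transient field set on the client side is null again after the
// serialize/deserialize round trip that ships user code to the executor.
public class TransientFieldDemo {

    static class FakeInputFormat implements Serializable {
        private static final long serialVersionUID = 1L;

        final String tableName;   // serialized normally
        transient Object conf;    // stand-in for org.apache.hadoop.conf.Configuration

        FakeInputFormat(String tableName, Object conf) {
            this.tableName = tableName;
            this.conf = conf;
        }
    }

    public static void main(String[] args) throws Exception {
        FakeInputFormat original = new FakeInputFormat("test_shx", new Object());
        System.out.println("before: conf = " + original.conf);   // non-null, like the job-graph stage

        // Round trip through Java serialization, similar in spirit to what
        // happens before InstantiationUtil.readObjectFromConfig runs on the executor.
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(original);
        }
        FakeInputFormat copy;
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bos.toByteArray()))) {
            copy = (FakeInputFormat) ois.readObject();
        }

        System.out.println("after:  tableName = " + copy.tableName); // "test_shx"
        System.out.println("after:  conf = " + copy.conf);           // null, like the executor stage
    }
}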

Any help would be greatly appreciated. (From the volunteer-compiled Flink mailing list archive)

小阿怪 2021-12-04 20:45:50
1 reply
  • Please do not send user questions to dev@flink.apache.org; dev@flink.apache.org is used for development discussion and, for convenience, only accepts English. The dev-subscribe@flink.apache.org address is only used for subscribing to mails from dev@flink.apache.org [1]; you do not need to CC it either.

    Just sending your question to user-zh@flink.apache.org is enough.

    Back to your question: could you post the SQL or the program that reproduces the null issue, rather than only the debug information (a hedged sketch of such a minimal program is appended after this reply)? We can keep communicating on user-zh@flink.apache.org if you prefer Chinese.

    (From the volunteer-compiled Flink mailing list archive)

    2021-12-04 22:35:27
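As the reply above requests, a minimal reproducing program is what would pin the problem down. Purely as an illustration of what such a program might look like on Flink 1.10 (old "connector.*" HBase property keys), here is a hedged sketch: the table name test_shx comes from the debug dump, while the column names, HBase version, and ZooKeeper settings are placeholder assumptions that would need to match the real environment.

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.java.StreamTableEnvironment;
import org.apache.flink.types.Row;

// Hedged sketch of a minimal Flink 1.10 job reading from HBase via SQL DDL.
// Column family/qualifier names, HBase version, and ZooKeeper settings are
// illustrative placeholders, not taken from the original question.
public class HBaseSqlRepro {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        EnvironmentSettings settings = EnvironmentSettings.newInstance()
                .useBlinkPlanner()
                .inStreamingMode()
                .build();
        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env, settings);

        // Flink 1.10 still registers the HBase connector through "connector.*" properties.
        tEnv.sqlUpdate(
                "CREATE TABLE test_shx (" +
                "  rowkey STRING," +
                "  cf1 ROW<q1 STRING>" +
                ") WITH (" +
                "  'connector.type' = 'hbase'," +
                "  'connector.version' = '1.4.3'," +
                "  'connector.table-name' = 'test_shx'," +
                "  'connector.zookeeper.quorum' = 'localhost:2181'," +
                "  'connector.zookeeper.znode.parent' = '/hbase'" +
                ")");

        Table result = tEnv.sqlQuery("SELECT rowkey, cf1.q1 FROM test_shx");
        tEnv.toAppendStream(result, Row.class).print();

        env.execute("hbase-sql-local-test");
    }
}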