
flink-1.11 DDL kafka-to-hive issue

Hive 1.2.1

Checkpoints are completing successfully (I checked the chk directory and the checkpoint data is indeed there, and Kafka does have data), but the Hive table has no data. Am I missing something?

```java
// Note: the original post listed public_date both as a regular column and as the
// partition column, which Hive rejects as a duplicate; only the PARTITIONED BY
// declaration is kept here.
String hiveSql = "CREATE TABLE stream_tmp.fs_table (\n" +
        "  host STRING,\n" +
        "  url STRING\n" +
        ") PARTITIONED BY (public_date STRING) " +
        "STORED AS PARQUET " +
        "TBLPROPERTIES (\n" +
        "  'sink.partition-commit.delay'='0 s',\n" +
        "  'sink.partition-commit.trigger'='partition-time',\n" +
        "  'sink.partition-commit.policy.kind'='metastore,success-file'\n" +
        ")";
tableEnv.executeSql(hiveSql);

tableEnv.executeSql("INSERT INTO stream_tmp.fs_table " +
        "SELECT host, url, DATE_FORMAT(public_date, 'yyyy-MM-dd') FROM stream_tmp.source_table");
```
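For context, the question does not show the Kafka source table that the INSERT reads from. Since the sink uses the 'partition-time' commit trigger, the source would need an event-time watermark. A rough sketch of what `stream_tmp.source_table` might look like follows; the topic, bootstrap servers, format, and watermark definition are all assumptions, not from the thread:

```sql
-- Hypothetical DDL for the Kafka source referenced by the INSERT above.
-- Connector options and the watermark column are illustrative assumptions;
-- the 'partition-time' commit trigger relies on such a watermark.
CREATE TABLE stream_tmp.source_table (
  host STRING,
  url STRING,
  public_date TIMESTAMP(3),
  WATERMARK FOR public_date AS public_date - INTERVAL '5' SECOND  -- assumed
) WITH (
  'connector' = 'kafka',
  'topic' = 'access_log',                             -- assumed topic name
  'properties.bootstrap.servers' = 'localhost:9092',  -- assumed
  'format' = 'json',                                  -- assumed
  'scan.startup.mode' = 'latest-offset'
);
```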

*From the volunteer-compiled Flink mailing list archive

小阿矿 2021-12-06 16:49:13
1 answer
  • Try configuring the rolling policy? https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/table/connectors/filesystem.html#sink-rolling-policy-rollover-interval


    2021-12-06 17:14:01
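Concretely, the suggestion is to add the sink rolling-policy options from the linked filesystem-connector page to the table's TBLPROPERTIES, so that in-progress files are rolled and become visible for partition commit. A sketch of the additional properties (the interval and size values here are illustrative, not from the thread):

```sql
-- Additional TBLPROPERTIES for stream_tmp.fs_table; values are example choices.
'sink.rolling-policy.rollover-interval'='1 min',  -- max open duration of a part file
'sink.rolling-policy.check-interval'='1 min',     -- how often rollover is checked
'sink.rolling-policy.file-size'='128MB'           -- max part-file size before rolling
```

Note that for bulk formats such as Parquet, Flink 1.11 only finalizes part files on checkpoint, so successful checkpoints plus a rolling policy together determine when data actually lands in the Hive partition.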