
[Flink SQL] SQL client fails to start with env.yaml

Hi all,

While testing Flink SQL today, the SQL client failed to start; the error log is below. What do I need to watch out for in the YAML configuration file format? Also, can the field delimiter support special characters such as the '\036' separator used in Hive CREATE TABLE statements? The full error log follows.
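On the delimiter part of the question: YAML double-quoted scalars accept hexadecimal escapes, so the separator Hive writes as octal '\036' can be expressed as "\x1e" (hex 1E equals octal 036, the ASCII Record Separator). Whether the CSV format in Flink 1.7 actually accepts a non-printable field delimiter is a separate question for the connector; the fragment below only sketches the YAML side, and the `field-delimiter` property is an assumption here:

```yaml
# Sketch only: expressing Hive's '\036' delimiter in YAML.
# Octal 036 == hex 0x1E; YAML double-quoted scalars support \x hex escapes.
format:
  type: csv
  field-delimiter: "\x1e"   # assumption: the csv format honors this property
```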

```
[root@server2 bin]# /home/hadoop/flink-1.7.2/bin/sql-client.sh embedded -e /home/hadoop/flink_test/env.yaml
Setting HADOOP_CONF_DIR=/etc/hadoop/conf because no HADOOP_CONF_DIR was set.
No default environment specified.
Searching for '/home/hadoop/flink-1.7.2/conf/sql-client-defaults.yaml'...found.
Reading default environment from: file:/home/hadoop/flink-1.7.2/conf/sql-client-defaults.yaml
Reading session environment from: file:/home/hadoop/flink_test/env.yaml
```

```
Exception in thread "main" org.apache.flink.table.client.SqlClientException: Could not parse environment file. Cause: YAML decoding problem:
while parsing a block collection
 in 'reader', line 2, column 2:
     - name: MyTableSource
     ^
expected <block end>, but found BlockMappingStart
 in 'reader', line 17, column 3:
      schema:
      ^
 (through reference chain: org.apache.flink.table.client.config.Environment["tables"])
	at org.apache.flink.table.client.config.Environment.parse(Environment.java:146)
	at org.apache.flink.table.client.SqlClient.readSessionEnvironment(SqlClient.java:162)
	at org.apache.flink.table.client.SqlClient.start(SqlClient.java:90)
	at org.apache.flink.table.client.SqlClient.main(SqlClient.java:187)
```

Configuration file env.yaml:

```yaml
tables:
 - name: MyTableSource
   type: source-table
   update-mode: append
   connector:
     type: filesystem
     path: "/home/hadoop/flink_test/input.csv"
   format:
     type: csv
     fields:
       - name: MyField1
         type: INT
       - name: MyField2
         type: VARCHAR
     line-delimiter: "\n"
     comment-prefix: "#"
  schema:
    - name: MyField1
      type: INT
    - name: MyField2
      type: VARCHAR
 - name: MyCustomView
   type: view
   query: "SELECT MyField2 FROM MyTableSource"
```
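The failure is a plain YAML indentation error and can be reproduced outside Flink with any YAML parser. A minimal sketch (assuming PyYAML is available; it is not part of the original post) that mis-indents `schema` relative to the other keys of the list item, just as the stack trace reports:

```python
import yaml  # assumption: PyYAML installed

# 'schema' is indented to column 3, between the '-' of the list item
# (column 2) and the item's other keys (column 4), so the parser sees a
# mapping start where it expects the block sequence to continue or end.
BROKEN_ENV = """\
tables:
 - name: MyTableSource
   type: source-table
  schema:
    - name: MyField1
"""

def parses(text: str) -> bool:
    """Return True if the YAML text parses cleanly, False on a YAML error."""
    try:
        yaml.safe_load(text)
        return True
    except yaml.YAMLError:
        return False

print(parses(BROKEN_ENV))  # → False
```

Aligning `schema` with `type` and `format` (column 4) makes the same document parse.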

Execution properties allow for changing the behavior of a table program.

```yaml
execution:
  type: streaming                    # required: execution mode, either 'batch' or 'streaming'
  result-mode: table                 # required: either 'table' or 'changelog'
  max-table-result-rows: 1000000     # optional: maximum number of maintained rows in
                                     #   'table' mode (1000000 by default, smaller than 1 means unlimited)
  time-characteristic: event-time    # optional: 'processing-time' or 'event-time' (default)
  parallelism: 1                     # optional: Flink's parallelism (1 by default)
  periodic-watermarks-interval: 200  # optional: interval for periodic watermarks (200 ms by default)
  max-parallelism: 16                # optional: Flink's maximum parallelism (128 by default)
  min-idle-state-retention: 0        # optional: table program's minimum idle state time
  max-idle-state-retention: 0        # optional: table program's maximum idle state time
  restart-strategy:                  # optional: restart strategy
    type: fallback                   # "fallback" to global restart strategy by default
```

Deployment properties allow for describing the cluster to which table programs are submitted.

```yaml
deployment:
  response-timeout: 5000
```

*From a volunteer-compiled archive of the Flink mailing list

雪哥哥 2021-12-07 15:52:23
1 Answer
  • format and schema should be at the same level. See the configuration for TableNumber1 in the flink-sql-client tests: test-sql-client-defaults.yaml *From a volunteer-compiled archive of the Flink mailing list
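Applied to the env.yaml above, that means `schema` must be indented to the same level as `format`, i.e. both as keys of the same table entry. A sketch of the corrected layout:

```yaml
tables:
  - name: MyTableSource
    type: source-table
    update-mode: append
    connector:
      type: filesystem
      path: "/home/hadoop/flink_test/input.csv"
    format:
      type: csv
      fields:
        - name: MyField1
          type: INT
        - name: MyField2
          type: VARCHAR
      line-delimiter: "\n"
      comment-prefix: "#"
    schema:              # same indentation level as 'format'
      - name: MyField1
        type: INT
      - name: MyField2
        type: VARCHAR
  - name: MyCustomView
    type: view
    query: "SELECT MyField2 FROM MyTableSource"
```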

    2021-12-07 16:22:37