Hi all,
While testing today, the Flink SQL client would not start. What do I need to watch out for in the format of the YAML configuration file? And can the field delimiter support special characters, such as the '\036' delimiter used in Hive CREATE TABLE statements? The detailed error log follows.
[root@server2 bin]# /home/hadoop/flink-1.7.2/bin/sql-client.sh embedded -e /home/hadoop/flink_test/env.yaml
Setting HADOOP_CONF_DIR=/etc/hadoop/conf because no HADOOP_CONF_DIR was set.
No default environment specified.
Searching for '/home/hadoop/flink-1.7.2/conf/sql-client-defaults.yaml'...found.
Reading default environment from: file:/home/hadoop/flink-1.7.2/conf/sql-client-defaults.yaml
Reading session environment from: file:/home/hadoop/flink_test/env.yaml
Exception in thread "main" org.apache.flink.table.client.SqlClientException: Could not parse environment file. Cause: YAML decoding problem: while parsing a block collection
 in 'reader', line 2, column 2:
     - name: MyTableSource
     ^
expected <block end>, but found BlockMappingStart
 in 'reader', line 17, column 3:
      schema:
      ^
 (through reference chain: org.apache.flink.table.client.config.Environment["tables"])
	at org.apache.flink.table.client.config.Environment.parse(Environment.java:146)
	at org.apache.flink.table.client.SqlClient.readSessionEnvironment(SqlClient.java:162)
	at org.apache.flink.table.client.SqlClient.start(SqlClient.java:90)
	at org.apache.flink.table.client.SqlClient.main(SqlClient.java:187)
-- Configuration file env.yaml
tables:
 - name: MyTableSource
   type: source-table
   update-mode: append
   connector:
     type: filesystem
     path: "/home/hadoop/flink_test/input.csv"
   format:
     type: csv
     fields:
       - name: MyField1
         type: INT
       - name: MyField2
         type: VARCHAR
     line-delimiter: "\n"
     comment-prefix: "#"
  schema:
    - name: MyField1
      type: INT
    - name: MyField2
      type: VARCHAR
 - name: MyCustomView
   type: view
   query: "SELECT MyField2 FROM MyTableSource"
execution:
  type: streaming                     # required: execution mode either 'batch' or 'streaming'
  result-mode: table                  # required: either 'table' or 'changelog'
  max-table-result-rows: 1000000      # optional: maximum number of maintained rows in 'table' mode
  time-characteristic: event-time     # optional: 'processing-time' or 'event-time' (default)
  parallelism: 1                      # optional: Flink's parallelism (1 by default)
  periodic-watermarks-interval: 200   # optional: interval for periodic watermarks (200 ms by default)
  max-parallelism: 16                 # optional: Flink's maximum parallelism (128 by default)
  min-idle-state-retention: 0         # optional: table program's minimum idle state time
  max-idle-state-retention: 0         # optional: table program's maximum idle state time
  restart-strategy:                   # optional: restart strategy
    type: fallback                    # "fallback" to global restart strategy by default
deployment:
  response-timeout: 5000

*From the volunteer-curated Flink mailing-list archive
format and schema should be at the same level. Take a look at the configuration file for TableNumber1 in the flink-sql-client tests: test-sql-client-defaults.yaml

*From the volunteer-curated Flink mailing-list archive
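For illustration, here is how the tables section might look once schema is aligned with format and the other per-table keys (a sketch following the filesystem/CSV example in the Flink 1.7 SQL Client documentation; only the indentation differs from the file above):

tables:
  - name: MyTableSource
    type: source-table
    update-mode: append
    connector:
      type: filesystem
      path: "/home/hadoop/flink_test/input.csv"
    format:
      type: csv
      fields:
        - name: MyField1
          type: INT
        - name: MyField2
          type: VARCHAR
      line-delimiter: "\n"
      comment-prefix: "#"
    schema:    # now a sibling of connector and format, indented like the other keys of this table entry
      - name: MyField1
        type: INT
      - name: MyField2
        type: VARCHAR
  - name: MyCustomView
    type: view
    query: "SELECT MyField2 FROM MyTableSource"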
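On the '\036' question: that is the octal escape for the ASCII record-separator byte 0x1E. YAML does not understand octal escapes, but a double-quoted YAML scalar does accept hex and Unicode escapes, so the byte can be written as "\x1E" or "\u001E". A minimal sketch, assuming the old CSV format's field-delimiter property also accepts a non-printable character (not verified against Flink 1.7):

format:
  type: csv
  field-delimiter: "\x1E"   # ASCII 0x1E = octal \036, as used in the Hive DDL; "\u001E" is equivalent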