
How do I fix a ClassCastException when writing a Parquet file?

When writing a Parquet file with ParquetAvroWriters.forGenericRecord(Schema schema), I get a ClassCastException. Here is my code:

// transform to DataStream
// TupleTypeInfo tupleTypeInfo = new TupleTypeInfo(GenericData.Record.class, BasicTypeInfo.STRING_TYPE_INFO, BasicTypeInfo.STRING_TYPE_INFO);
TupleTypeInfo tupleTypeInfo = new TupleTypeInfo(BasicTypeInfo.STRING_TYPE_INFO, BasicTypeInfo.STRING_TYPE_INFO);
DataStream testDataStream = flinkTableEnv.toAppendStream(test, tupleTypeInfo);
testDataStream.print().setParallelism(1);

ArrayList<org.apache.avro.Schema.Field> fields = new ArrayList<org.apache.avro.Schema.Field>();
fields.add(new org.apache.avro.Schema.Field("id", org.apache.avro.Schema.create(org.apache.avro.Schema.Type.STRING), "id", JsonProperties.NULL_VALUE));
fields.add(new org.apache.avro.Schema.Field("time", org.apache.avro.Schema.create(org.apache.avro.Schema.Type.STRING), "time", JsonProperties.NULL_VALUE));
org.apache.avro.Schema parquetSinkSchema = org.apache.avro.Schema.createRecord("pi", "flinkParquetSink", "flink.parquet", true, fields);

String fileSinkPath = "./xxx.text/rs6/";
StreamingFileSink parquetSink = StreamingFileSink
    .forBulkFormat(new Path(fileSinkPath), ParquetAvroWriters.forGenericRecord(parquetSinkSchema))
    .withRollingPolicy(OnCheckpointRollingPolicy.build())
    .build();
testDataStream.addSink(parquetSink).setParallelism(1);

flinkTableEnv.execute("ReadFromKafkaConnectorWriteToLocalFileJava");

Here is the exception:

09:29:50,283 INFO org.apache.flink.runtime.taskmanager.Task - Sink: Unnamed (1/1) (79505cb6ab2df38886663fd99461315a) switched from RUNNING to FAILED.
java.lang.ClassCastException: org.apache.flink.api.java.tuple.Tuple2 cannot be cast to org.apache.avro.generic.IndexedRecord
    at org.apache.avro.generic.GenericData.getField(GenericData.java:697)
    at org.apache.parquet.avro.AvroWriteSupport.writeRecordFields(AvroWriteSupport.java:188)
    at org.apache.parquet.avro.AvroWriteSupport.write(AvroWriteSupport.java:165)
    at org.apache.parquet.hadoop.InternalParquetRecordWriter.write(InternalParquetRecordWriter.java:128)
    at org.apache.parquet.hadoop.ParquetWriter.write(ParquetWriter.java:299)
    at org.apache.flink.formats.parquet.ParquetBulkWriter.addElement(ParquetBulkWriter.java:52)
    at org.apache.flink.streaming.api.functions.sink.filesystem.BulkPartWriter.write(BulkPartWriter.java:50)
    at org.apache.flink.streaming.api.functions.sink.filesystem.Bucket.write(Bucket.java:214)
    at org.apache.flink.streaming.api.functions.sink.filesystem.Buckets.onElement(Buckets.java:274)
    at org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink.invoke(StreamingFileSink.java:445)
    at org.apache.flink.streaming.api.operators.StreamSink.processElement(StreamSink.java:56)
    at org.apache.flink.streaming.runtime.tasks.OneInputStreamTask$StreamTaskNetworkOutput.emitRecord(OneInputStreamTask.java:173)
    at org.apache.flink.streaming.runtime.io.StreamTaskNetworkInput.processElement(StreamTaskNetworkInput.java:151)
    at org.apache.flink.streaming.runtime.io.StreamTaskNetworkInput.emitNext(StreamTaskNetworkInput.java:128)
    at org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:69)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:311)
    at org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:187)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:487)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:470)
    at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:707)
    at org.apache.flink.runtime.taskmanager.Task.run(Task.java:532)
    at java.lang.Thread.run(Thread.java:748)
09:29:50,284 INFO org.apache.flink.runtime.taskmanager.Task - Freeing task resources for Sink: Unnamed (1/1) (79505cb6ab2df38886663fd99461315a).
09:29:50,285 INFO org.apache.flink.runtime.taskmanager.Task - Ensuring all FileSystem streams are closed for task Sink: Unnamed (1/1) (79505cb6ab2df38886663fd99461315a) [FAILED]
09:29:50,289 INFO org.apache.flink.runtime.taskexecutor.TaskExecutor - Un-registering task and sending final execution state FAILED to JobManager for task Sink: Unnamed (1/1) 79505cb6ab2df38886663fd99461315a.
09:29:50,293 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Sink: Unnamed (1/1) (79505cb6ab2df38886663fd99461315a) switched from RUNNING to FAILED.
java.lang.ClassCastException: org.apache.flink.api.java.tuple.Tuple2 cannot be cast to org.apache.avro.generic.IndexedRecord

Am I using the API incorrectly, or is something else the problem?

*Collected from the volunteer-curated Flink mailing list archive

游客nnqbtnagn7h6s 2021-12-06 19:07:10
1 Answer
  • Hi, I tried this, and the pure-DataStream approach does work. For a concrete example, see flink-formats\flink-parquet\src\test\java\org\apache\flink\formats\parquet\avro\ParquetStreamingFileSinkITCase.
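    For context, here is a condensed Java sketch of the pure-DataStream pattern, assuming env is a StreamExecutionEnvironment and reusing the parquetSinkSchema and fileSinkPath names from the question; it is an illustration under those assumptions, not code from the original thread.

        import java.util.Collections;
        import org.apache.avro.generic.GenericData;
        import org.apache.avro.generic.GenericRecord;
        import org.apache.flink.core.fs.Path;
        import org.apache.flink.formats.avro.typeutils.GenericRecordAvroTypeInfo;
        import org.apache.flink.formats.parquet.avro.ParquetAvroWriters;
        import org.apache.flink.streaming.api.datastream.DataStream;
        import org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink;
        import org.apache.flink.streaming.api.functions.sink.filesystem.rollingpolicies.OnCheckpointRollingPolicy;

        // Build one GenericRecord that matches the "id"/"time" schema from the
        // question, wrap it in a DataStream with explicit Avro type information,
        // and write it with the same bulk Parquet sink.
        GenericRecord record = new GenericData.Record(parquetSinkSchema);
        record.put("id", "1");
        record.put("time", "2020-01-01 00:00:00");

        DataStream<GenericRecord> stream = env.fromCollection(
            Collections.singletonList(record),
            new GenericRecordAvroTypeInfo(parquetSinkSchema));

        stream.addSink(
            StreamingFileSink
                .forBulkFormat(new Path(fileSinkPath),
                    ParquetAvroWriters.forGenericRecord(parquetSinkSchema))
                .withRollingPolicy(OnCheckpointRollingPolicy.build())
                .build());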

    In the Table-to-DataStream approach, I first convert the Table to a DataStream[Row] and then map it into a DataStream[GenericRecord]:

    dataStream.map(x => {
      ...
      val fields = new util.ArrayList[Schema.Field]
      fields.add(new Schema.Field("platform", create(org.apache.avro.Schema.Type.STRING), "platform", null))
      fields.add(new Schema.Field("event", create(org.apache.avro.Schema.Type.STRING), "event", null))
      fields.add(new Schema.Field("dt", create(org.apache.avro.Schema.Type.STRING), "dt", null))
      val parquetSinkSchema: Schema = createRecord("pi", "flinkParquetSink", "flink.parquet", true, fields)
      val record = new GenericData.Record(parquetSinkSchema).asInstanceOf[GenericRecord]
      record.put("platform", x.getField(0))
      record.put("event", x.getField(1))
      record.put("dt", x.getField(2))
      record
    })
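    For readers on the Java API, below is a minimal sketch of the same Row-to-GenericRecord conversion against the question's "id"/"time" schema. The RowToRecord helper name is hypothetical, and the sketch assumes flink-avro's GenericRecordAvroTypeInfo is on the classpath: without explicit Avro type information, Flink treats GenericRecord as a generic type and falls back to Kryo serialization.

        import org.apache.avro.Schema;
        import org.apache.avro.generic.GenericData;
        import org.apache.avro.generic.GenericRecord;
        import org.apache.flink.api.common.functions.RichMapFunction;
        import org.apache.flink.configuration.Configuration;
        import org.apache.flink.formats.avro.typeutils.GenericRecordAvroTypeInfo;
        import org.apache.flink.streaming.api.datastream.DataStream;
        import org.apache.flink.types.Row;

        // Maps each Row to a GenericRecord. The schema is re-parsed from its
        // String form in open(), because org.apache.avro.Schema is not
        // serializable in older Avro versions and cannot be captured in the
        // function's closure directly.
        public static class RowToRecord extends RichMapFunction<Row, GenericRecord> {
            private final String schemaString;
            private transient Schema schema;

            RowToRecord(String schemaString) {
                this.schemaString = schemaString;
            }

            @Override
            public void open(Configuration parameters) {
                schema = new Schema.Parser().parse(schemaString);
            }

            @Override
            public GenericRecord map(Row row) {
                GenericRecord record = new GenericData.Record(schema);
                record.put("id", row.getField(0));
                record.put("time", row.getField(1));
                return record;
            }
        }

        // Wiring it up with the schema and sink built in the question:
        DataStream<Row> rowStream = flinkTableEnv.toAppendStream(test, Row.class);
        rowStream
            .map(new RowToRecord(parquetSinkSchema.toString()))
            // Declare the element type explicitly so GenericRecord is handled
            // by Flink's Avro serializer rather than Kryo.
            .returns(new GenericRecordAvroTypeInfo(parquetSinkSchema))
            .addSink(parquetSink)
            .setParallelism(1);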

    *Collected from the volunteer-curated Flink mailing list archive

    2021-12-06 21:13:47