When writing a Parquet file with ParquetAvroWriters.forGenericRecord(Schema schema), I get a ClassCastException. Here is my code:
// transform to DataStream
// TupleTypeInfo tupleTypeInfo = new TupleTypeInfo(GenericData.Record.class, BasicTypeInfo.STRING_TYPE_INFO, BasicTypeInfo.STRING_TYPE_INFO);
TupleTypeInfo tupleTypeInfo = new TupleTypeInfo(BasicTypeInfo.STRING_TYPE_INFO, BasicTypeInfo.STRING_TYPE_INFO);
DataStream testDataStream = flinkTableEnv.toAppendStream(test, tupleTypeInfo);
testDataStream.print().setParallelism(1);

ArrayList<org.apache.avro.Schema.Field> fields = new ArrayList<org.apache.avro.Schema.Field>();
fields.add(new org.apache.avro.Schema.Field("id", org.apache.avro.Schema.create(org.apache.avro.Schema.Type.STRING), "id", JsonProperties.NULL_VALUE));
fields.add(new org.apache.avro.Schema.Field("time", org.apache.avro.Schema.create(org.apache.avro.Schema.Type.STRING), "time", JsonProperties.NULL_VALUE));
org.apache.avro.Schema parquetSinkSchema = org.apache.avro.Schema.createRecord("pi", "flinkParquetSink", "flink.parquet", true, fields);

String fileSinkPath = "./xxx.text/rs6/";
StreamingFileSink parquetSink = StreamingFileSink
    .forBulkFormat(new Path(fileSinkPath), ParquetAvroWriters.forGenericRecord(parquetSinkSchema))
    .withRollingPolicy(OnCheckpointRollingPolicy.build())
    .build();
testDataStream.addSink(parquetSink).setParallelism(1);
flinkTableEnv.execute("ReadFromKafkaConnectorWriteToLocalFileJava");
Here is the exception:
09:29:50,283 INFO org.apache.flink.runtime.taskmanager.Task - Sink: Unnamed (1/1) (79505cb6ab2df38886663fd99461315a) switched from RUNNING to FAILED.
java.lang.ClassCastException: org.apache.flink.api.java.tuple.Tuple2 cannot be cast to org.apache.avro.generic.IndexedRecord
at org.apache.avro.generic.GenericData.getField(GenericData.java:697)
at org.apache.parquet.avro.AvroWriteSupport.writeRecordFields(AvroWriteSupport.java:188)
at org.apache.parquet.avro.AvroWriteSupport.write(AvroWriteSupport.java:165)
at org.apache.parquet.hadoop.InternalParquetRecordWriter.write(InternalParquetRecordWriter.java:128)
at org.apache.parquet.hadoop.ParquetWriter.write(ParquetWriter.java:299)
at org.apache.flink.formats.parquet.ParquetBulkWriter.addElement(ParquetBulkWriter.java:52)
at org.apache.flink.streaming.api.functions.sink.filesystem.BulkPartWriter.write(BulkPartWriter.java:50)
at org.apache.flink.streaming.api.functions.sink.filesystem.Bucket.write(Bucket.java:214)
at org.apache.flink.streaming.api.functions.sink.filesystem.Buckets.onElement(Buckets.java:274)
at org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink.invoke(StreamingFileSink.java:445)
at org.apache.flink.streaming.api.operators.StreamSink.processElement(StreamSink.java:56)
at org.apache.flink.streaming.runtime.tasks.OneInputStreamTask$StreamTaskNetworkOutput.emitRecord(OneInputStreamTask.java:173)
at org.apache.flink.streaming.runtime.io.StreamTaskNetworkInput.processElement(StreamTaskNetworkInput.java:151)
at org.apache.flink.streaming.runtime.io.StreamTaskNetworkInput.emitNext(StreamTaskNetworkInput.java:128)
at org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:69)
at org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:311)
at org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:187)
at org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:487)
at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:470)
at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:707)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:532)
at java.lang.Thread.run(Thread.java:748)
09:29:50,284 INFO org.apache.flink.runtime.taskmanager.Task - Freeing task resources for Sink: Unnamed (1/1) (79505cb6ab2df38886663fd99461315a).
09:29:50,285 INFO org.apache.flink.runtime.taskmanager.Task - Ensuring all FileSystem streams are closed for task Sink: Unnamed (1/1) (79505cb6ab2df38886663fd99461315a) [FAILED]
09:29:50,289 INFO org.apache.flink.runtime.taskexecutor.TaskExecutor - Un-registering task and sending final execution state FAILED to JobManager for task Sink: Unnamed (1/1) 79505cb6ab2df38886663fd99461315a.
09:29:50,293 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Sink: Unnamed (1/1) (79505cb6ab2df38886663fd99461315a) switched from RUNNING to FAILED.
java.lang.ClassCastException: org.apache.flink.api.java.tuple.Tuple2 cannot be cast to org.apache.avro.generic.IndexedRecord
... (same stack trace as above)
Am I using this the wrong way, or is something else the problem?
*From the volunteer-curated Flink mailing list archive
From the error message, the problem is that the element type of your Flink DataStream does not match the GenericRecord type you specified for the Parquet writer. Specifically, a ClassCastException is thrown when an element of type org.apache.flink.api.java.tuple.Tuple2 is cast to org.apache.avro.generic.IndexedRecord.
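To make the failure concrete, here is a minimal standalone sketch of the cast Avro performs internally when the sink writes an element (paraphrased for illustration; not the actual Avro source):

import org.apache.avro.generic.IndexedRecord;
import org.apache.flink.api.java.tuple.Tuple2;

public class CastFailureDemo {
    public static void main(String[] args) {
        Object element = Tuple2.of("1", "2020-06-01 09:29:50");
        // AvroWriteSupport hands each element to GenericData, which assumes it
        // implements IndexedRecord; a Flink Tuple2 does not, so this cast fails
        // at runtime exactly as in the stack trace above.
        IndexedRecord record = (IndexedRecord) element;
    }
}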
In your code you define a Tuple-based TypeInfo:
TupleTypeInfo tupleTypeInfo = new TupleTypeInfo(BasicTypeInfo.STRING_TYPE_INFO, BasicTypeInfo.STRING_TYPE_INFO);
This means each element of your DataStream is a Tuple2 with two String fields. However, you created the Parquet writer with forGenericRecord(parquetSinkSchema), which expects every element to be a GenericRecord conforming to the given Avro schema.
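As a side note, the same two-field record schema can be built more compactly with Avro's SchemaBuilder; the sketch below is only roughly equivalent to the Field-list construction in the question (the fields become non-null strings without defaults). Note also that the true passed to Schema.createRecord in the question is the isError flag, which should normally be false for ordinary data records.

import org.apache.avro.Schema;
import org.apache.avro.SchemaBuilder;

Schema parquetSinkSchema = SchemaBuilder
        .record("pi").namespace("flink.parquet")
        .fields()
        .requiredString("id")
        .requiredString("time")
        .endRecord();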
To fix this, make sure the structure of your DataStream matches the record structure the Parquet file expects. One way is to map the Flink Tuples to Avro GenericRecords. Here is a simplified example of such a converter:
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;
import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.api.java.tuple.Tuple2;

public class TupleToGenericRecordConverter implements MapFunction<Tuple2<String, String>, GenericRecord> {
    // Avro's Schema is not Serializable in older Avro versions, so hold it as a JSON string.
    private final String schemaString;
    private transient Schema schema;

    public TupleToGenericRecordConverter(Schema schema) {
        this.schemaString = schema.toString();
    }

    @Override
    public GenericRecord map(Tuple2<String, String> value) throws Exception {
        if (schema == null) {
            schema = new Schema.Parser().parse(schemaString); // re-parse once per task
        }
        GenericRecord record = new GenericData.Record(schema);
        record.put("id", value.f0);
        record.put("time", value.f1);
        return record;
    }
}
DataStream<Tuple2<String, String>> originalStream = ...; // your original DataStream

// Apply the conversion
DataStream<GenericRecord> convertedStream = originalStream.map(new TupleToGenericRecordConverter(parquetSinkSchema));

// Then build the sink and attach it to the converted stream
StreamingFileSink<GenericRecord> parquetSink = StreamingFileSink
.forBulkFormat(new Path(fileSinkPath), ParquetAvroWriters.forGenericRecord(parquetSinkSchema))
.withRollingPolicy(OnCheckpointRollingPolicy.build())
.build();
convertedStream.addSink(parquetSink).setParallelism(1);
With this, the original Tuple2 elements are converted into GenericRecords that the Parquet Avro writer can handle, which should eliminate the ClassCastException above.
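One more caveat worth checking, an assumption based on a common Flink/Avro pitfall rather than anything visible in this stack trace: depending on the Flink version, the GenericRecord output of the map may fall back to Kryo serialization unless you declare explicit type information. flink-avro ships GenericRecordAvroTypeInfo for exactly this:

import org.apache.flink.formats.avro.typeutils.GenericRecordAvroTypeInfo;

DataStream<GenericRecord> convertedStream = originalStream
        .map(new TupleToGenericRecordConverter(parquetSinkSchema))
        // Without this, Flink may serialize GenericRecord with Kryo, which
        // can fail because the Avro Schema inside the record is not Kryo-friendly.
        .returns(new GenericRecordAvroTypeInfo(parquetSinkSchema));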