
How do I fix a ClassCastException when writing Parquet files?

When writing a Parquet file with ParquetAvroWriters.forGenericRecord(Schema schema), I get a ClassCastException. Here is my code:

// transform to DataStream
// TupleTypeInfo tupleTypeInfo = new TupleTypeInfo(GenericData.Record.class, BasicTypeInfo.STRING_TYPE_INFO, BasicTypeInfo.STRING_TYPE_INFO);
TupleTypeInfo tupleTypeInfo = new TupleTypeInfo(BasicTypeInfo.STRING_TYPE_INFO, BasicTypeInfo.STRING_TYPE_INFO);
DataStream testDataStream = flinkTableEnv.toAppendStream(test, tupleTypeInfo);
testDataStream.print().setParallelism(1);

ArrayList<org.apache.avro.Schema.Field> fields = new ArrayList<org.apache.avro.Schema.Field>();
fields.add(new org.apache.avro.Schema.Field("id", org.apache.avro.Schema.create(org.apache.avro.Schema.Type.STRING), "id", JsonProperties.NULL_VALUE));
fields.add(new org.apache.avro.Schema.Field("time", org.apache.avro.Schema.create(org.apache.avro.Schema.Type.STRING), "time", JsonProperties.NULL_VALUE));
org.apache.avro.Schema parquetSinkSchema = org.apache.avro.Schema.createRecord("pi", "flinkParquetSink", "flink.parquet", true, fields);

String fileSinkPath = "./xxx.text/rs6/";
StreamingFileSink parquetSink = StreamingFileSink
    .forBulkFormat(new Path(fileSinkPath), ParquetAvroWriters.forGenericRecord(parquetSinkSchema))
    .withRollingPolicy(OnCheckpointRollingPolicy.build())
    .build();
testDataStream.addSink(parquetSink).setParallelism(1);

flinkTableEnv.execute("ReadFromKafkaConnectorWriteToLocalFileJava");

Here is the exception:

09:29:50,283 INFO org.apache.flink.runtime.taskmanager.Task - Sink: Unnamed (1/1) (79505cb6ab2df38886663fd99461315a) switched from RUNNING to FAILED.
java.lang.ClassCastException: org.apache.flink.api.java.tuple.Tuple2 cannot be cast to org.apache.avro.generic.IndexedRecord
    at org.apache.avro.generic.GenericData.getField(GenericData.java:697)
    at org.apache.parquet.avro.AvroWriteSupport.writeRecordFields(AvroWriteSupport.java:188)
    at org.apache.parquet.avro.AvroWriteSupport.write(AvroWriteSupport.java:165)
    at org.apache.parquet.hadoop.InternalParquetRecordWriter.write(InternalParquetRecordWriter.java:128)
    at org.apache.parquet.hadoop.ParquetWriter.write(ParquetWriter.java:299)
    at org.apache.flink.formats.parquet.ParquetBulkWriter.addElement(ParquetBulkWriter.java:52)
    at org.apache.flink.streaming.api.functions.sink.filesystem.BulkPartWriter.write(BulkPartWriter.java:50)
    at org.apache.flink.streaming.api.functions.sink.filesystem.Bucket.write(Bucket.java:214)
    at org.apache.flink.streaming.api.functions.sink.filesystem.Buckets.onElement(Buckets.java:274)
    at org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink.invoke(StreamingFileSink.java:445)
    at org.apache.flink.streaming.api.operators.StreamSink.processElement(StreamSink.java:56)
    at org.apache.flink.streaming.runtime.tasks.OneInputStreamTask$StreamTaskNetworkOutput.emitRecord(OneInputStreamTask.java:173)
    at org.apache.flink.streaming.runtime.io.StreamTaskNetworkInput.processElement(StreamTaskNetworkInput.java:151)
    at org.apache.flink.streaming.runtime.io.StreamTaskNetworkInput.emitNext(StreamTaskNetworkInput.java:128)
    at org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:69)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:311)
    at org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:187)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:487)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:470)
    at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:707)
    at org.apache.flink.runtime.taskmanager.Task.run(Task.java:532)
    at java.lang.Thread.run(Thread.java:748)
09:29:50,284 INFO org.apache.flink.runtime.taskmanager.Task - Freeing task resources for Sink: Unnamed (1/1) (79505cb6ab2df38886663fd99461315a).
09:29:50,285 INFO org.apache.flink.runtime.taskmanager.Task - Ensuring all FileSystem streams are closed for task Sink: Unnamed (1/1) (79505cb6ab2df38886663fd99461315a) [FAILED]
09:29:50,289 INFO org.apache.flink.runtime.taskexecutor.TaskExecutor - Un-registering task and sending final execution state FAILED to JobManager for task Sink: Unnamed (1/1) 79505cb6ab2df38886663fd99461315a.
09:29:50,293 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Sink: Unnamed (1/1) (79505cb6ab2df38886663fd99461315a) switched from RUNNING to FAILED.
java.lang.ClassCastException: org.apache.flink.api.java.tuple.Tuple2 cannot be cast to org.apache.avro.generic.IndexedRecord
    at org.apache.avro.generic.GenericData.getField(GenericData.java:697)
    ... (same stack trace as above)

Am I using this incorrectly, or is something else wrong?

*From the volunteer-curated Flink mailing list archive

游客nnqbtnagn7h6s 2021-12-06 18:48:39
Alibaba Cloud AI Assistant answer

From the error message, the problem is that the element type in your Flink DataStream does not match the GenericRecord type you specified for the Parquet Avro writer. Specifically, a ClassCastException is thrown when the writer tries to cast an element of type org.apache.flink.api.java.tuple.Tuple2 to org.apache.avro.generic.IndexedRecord.

In your code, you defined a Tuple-based TypeInfo:

TupleTypeInfo tupleTypeInfo = new TupleTypeInfo(BasicTypeInfo.STRING_TYPE_INFO, BasicTypeInfo.STRING_TYPE_INFO);

This means each element of your DataStream is a Tuple2 containing two String fields. However, by creating the writer with forGenericRecord(parquetSinkSchema), you told it to expect GenericRecord instances conforming to the Avro schema.
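The exception is an ordinary Java cast failure: inside the writer, Avro's GenericData.getField casts each incoming element to IndexedRecord, and Flink's Tuple2 simply does not implement that interface. The failure mode can be reproduced with plain stand-in classes (a sketch; these are simplified stand-ins, not the real Avro/Flink types):

```java
public class CastDemo {
    // Stand-in for org.apache.avro.generic.IndexedRecord (simplified).
    interface IndexedRecord { Object get(int i); }

    // Stand-in for org.apache.flink.api.java.tuple.Tuple2 --
    // note that it does NOT implement IndexedRecord.
    static class Tuple2 {
        final Object f0, f1;
        Tuple2(Object f0, Object f1) { this.f0 = f0; this.f1 = f1; }
    }

    // Mirrors the cast that GenericData.getField performs on each element.
    static String tryWrite(Object element) {
        try {
            IndexedRecord record = (IndexedRecord) element;
            return "wrote field " + record.get(0);
        } catch (ClassCastException e) {
            return "caught ClassCastException";
        }
    }

    public static void main(String[] args) {
        // prints: caught ClassCastException
        System.out.println(tryWrite(new Tuple2("id-1", "2021-12-06")));
    }
}
```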

To fix this, make sure the data structure of your DataStream matches the record structure the Parquet file expects. One approach is to convert the Flink Tuple data into Avro GenericRecord. Here is a simplified example of how to do the conversion:

  1. Create a conversion function that takes a Tuple2 and returns the corresponding GenericRecord.
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;
import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.api.java.tuple.Tuple2;

public class TupleToGenericRecordConverter implements MapFunction<Tuple2<String, String>, GenericRecord> {
    private final Schema schema;

    public TupleToGenericRecordConverter(Schema schema) {
        this.schema = schema;
    }

    @Override
    public GenericRecord map(Tuple2<String, String> value) throws Exception {
        GenericRecord record = new GenericData.Record(schema);
        record.put("id", value.f0);
        record.put("time", value.f1);
        return record;
    }
}
  2. Apply the conversion: before writing to Parquet, use the converter defined above to map the DataStream into a DataStream of GenericRecord.
DataStream<Tuple2<String, String>> originalStream = ...; // your original DataStream

// apply the conversion
DataStream<GenericRecord> convertedStream = originalStream.map(new TupleToGenericRecordConverter(parquetSinkSchema));

// then build the sink from convertedStream
StreamingFileSink<GenericRecord> parquetSink = StreamingFileSink
    .forBulkFormat(new Path(fileSinkPath), ParquetAvroWriters.forGenericRecord(parquetSinkSchema))
    .withRollingPolicy(OnCheckpointRollingPolicy.build())
    .build();

convertedStream.addSink(parquetSink).setParallelism(1);
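To sanity-check the field mapping in isolation, the converter's logic can be exercised without Flink or Avro on the classpath, using a Map as a stand-in for GenericData.Record (a sketch, not the real types):

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class ConvertSketch {
    // Same field mapping as TupleToGenericRecordConverter, with a Map
    // playing the role of GenericData.Record.
    static Map<String, Object> toRecord(String id, String time) {
        Map<String, Object> record = new LinkedHashMap<>();
        record.put("id", id);
        record.put("time", time);
        return record;
    }

    public static void main(String[] args) {
        List<String[]> tuples = List.of(new String[]{"1", "2021-12-06"},
                                        new String[]{"2", "2021-12-07"});
        List<Map<String, Object>> records = tuples.stream()
                .map(t -> toRecord(t[0], t[1]))
                .collect(Collectors.toList());
        // prints: [{id=1, time=2021-12-06}, {id=2, time=2021-12-07}]
        System.out.println(records);
    }
}
```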

This converts the original Tuple2 data into the GenericRecord format the Parquet Avro writer expects, which should eliminate the class-cast exception. Note that Flink cannot automatically infer serialization for GenericRecord; if you then hit type-extraction or Kryo serialization errors on the converted stream, you may also need to supply explicit type information, e.g. .returns(new GenericRecordAvroTypeInfo(parquetSinkSchema)) from the flink-avro module.
