
An exception is thrown after converting a table to a stream

Scenario: after converting a Table to a stream and then joining it against a dimension table, an exception occurs.
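For context, here is a minimal sketch of the kind of conversion the scenario describes, assuming Flink 1.13+ and the Java API; the orders table, its datagen schema, and the simple map that stands in for the real dimension-table lookup are placeholders, not the original job:

    import org.apache.flink.api.common.typeinfo.Types;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.table.api.Table;
    import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
    import org.apache.flink.types.Row;

    public class TableToStreamSketch {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);

            // Placeholder source table; a real job would use its own connector / DDL.
            tEnv.executeSql(
                "CREATE TABLE orders (order_id STRING, user_id STRING) "
                    + "WITH ('connector' = 'datagen', 'rows-per-second' = '1')");

            Table orders = tEnv.from("orders");

            // Table -> DataStream conversion (the step the question refers to).
            DataStream<Row> orderStream = tEnv.toDataStream(orders);

            // Stand-in for the dimension-table enrichment; a real job might use a
            // SQL lookup join or an async I/O operator here instead.
            orderStream
                .map(row -> "enriched: " + row)
                .returns(Types.STRING)
                .print();

            env.execute("table-to-stream sketch");
        }
    }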

Exception:
2021-11-17 19:39:53.056|ERROR|org.apache.flink.runtime.io.network.partition.BufferWritingResultPartition|flink-taskexecutor-io-thread-4|releaseInternal|233|Error during release of result subpartition: C:\Users\monster\AppData\Local\Temp\flink-netty-shuffle-76964c7f-ffc6-490f-82bc-a3091412955b\0be871a1d3b522377ffd8f565bb3b81f.channel
java.nio.file.NoSuchFileException: C:\Users\monster\AppData\Local\Temp\flink-netty-shuffle-76964c7f-ffc6-490f-82bc-a3091412955b\0be871a1d3b522377ffd8f565bb3b81f.channel
    at sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:79)
    at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
    at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102)
    at sun.nio.fs.WindowsFileSystemProvider.implDelete(WindowsFileSystemProvider.java:269)
    at sun.nio.fs.AbstractFileSystemProvider.delete(AbstractFileSystemProvider.java:103)
    at java.nio.file.Files.delete(Files.java:1126)
    at org.apache.flink.runtime.io.network.partition.FileChannelBoundedData.close(FileChannelBoundedData.java:97)
    at org.apache.flink.runtime.io.network.partition.BoundedBlockingSubpartition.checkReaderReferencesAndDispose(BoundedBlockingSubpartition.java:253)
    at org.apache.flink.runtime.io.network.partition.BoundedBlockingSubpartition.release(BoundedBlockingSubpartition.java:205)
    at org.apache.flink.runtime.io.network.partition.BufferWritingResultPartition.releaseInternal(BufferWritingResultPartition.java:229)
    at org.apache.flink.runtime.io.network.partition.ResultPartition.release(ResultPartition.java:246)
    at org.apache.flink.runtime.io.network.partition.ResultPartitionManager.releasePartition(ResultPartitionManager.java:86)
    at org.apache.flink.runtime.io.network.NettyShuffleEnvironment.lambda$releasePartitionsLocally$0(NettyShuffleEnvironment.java:181)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:748)

Chen *From the Flink mailing list archive, curated by volunteers

moonlightdisco 2021-12-07 16:51:09
1 answer
  • Hi!

    Files like this are used to exchange data between tasks. I'm not very familiar with how Windows behaves here, but it looks like this temporary file was deleted. Is some automatic cleanup policy configured? Also, if an error like this is only occasional, Flink's failover mechanism will rerun the job from the latest checkpoint, so there is no need to worry about the job's availability or correctness. *From the Flink mailing list archive, curated by volunteers

    2021-12-07 17:27:07
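One way to test the cleanup hypothesis from the reply above is to move Flink's temporary files out of the Windows user temp directory. A minimal flink-conf.yaml sketch, assuming a standalone setup; the D:\flink-tmp path is a placeholder for a directory you create and manage yourself:

    # flink-conf.yaml
    # The flink-netty-shuffle-* directories (where the .channel file in the error
    # lives) are created under Flink's temporary directories, which default to the
    # system temp directory (java.io.tmpdir). Pointing io.tmp.dirs at a directory
    # that no automatic cleanup tool touches should keep such files from being
    # deleted externally.
    io.tmp.dirs: D:\flink-tmp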