I'm trying to read files from multiple S3 buckets.
Originally the buckets were meant to be in different regions, but that doesn't appear to be possible.
So for now I've copied the second bucket into the same region as the first bucket I'm reading from, which is also the region the Spark job runs in.
SparkSession setup:
import org.apache.spark.SparkConf
import org.apache.spark.sql.SparkSession

val sparkConf = new SparkConf()
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  .registerKryoClasses(Array(classOf[Event]))

val spark = SparkSession.builder
  .appName("Merge application")
  .config(sparkConf)
  .getOrCreate()
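The implicit SQLContext that parseEvents takes below is simply exposed from this session, roughly like so (simplified):

import org.apache.spark.sql.SQLContext

// Simplified: the SQLContext passed implicitly to parseEvents is the one backed by the session above
implicit val sqlContext: SQLContext = spark.sqlContext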
The function that is called with the SQLContext from the SparkSession created above:
// Reads (gzipped) JSON from the given bucket path, round-trips the rows back to JSON strings,
// and builds an Event from each record
private def parseEvents(bucketPath: String, service: String)(
    implicit sqlContext: SQLContext
): Try[RDD[Event]] =
  Try(
    sqlContext.read
      .option("codec", "org.apache.hadoop.io.compress.GzipCodec")
      .json(bucketPath)
      .toJSON
      .rdd
      .map(buildEvent(_, bucketPath, service).get)
  )
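Event and buildEvent aren't shown to keep this short; for context they look roughly like this (simplified, field names are placeholders):

import scala.util.Try

// Simplified placeholder -- the real case class has more fields
case class Event(id: String, region: Option[String], payload: String)

// Parses one JSON string (a row from .toJSON.rdd) into an Event, wrapped in Try
private def buildEvent(json: String, bucketPath: String, service: String): Try[Event] = ???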
Main flow:
for {
  bucketOnePath <- buildBucketPath(config.bucketOne.name)
  _ <- log(s"Reading events from $bucketOnePath")
  bucketOneEvents: RDD[Event] <- parseEvents(bucketOnePath, config.service)
  _ <- log(s"Enriching events from $bucketOnePath with originating region data")
  bucketOneEventsWithRegion: RDD[Event] <- enrichEventsWithRegion(
    bucketOneEvents,
    config.bucketOne.region
  )
  bucketTwoPath <- buildBucketPath(config.bucketTwo.name)
  _ <- log(s"Reading events from $bucketTwoPath")
  bucketTwoEvents: RDD[Event] <- parseEvents(config.bucketTwo.name, config.service)
  _ <- log(s"Enriching events from $bucketTwoPath with originating region data")
  bucketTwoEventsWithRegion: RDD[Event] <- enrichEventsWithRegion(
    bucketTwoEvents,
    config.bucketTwo.region
  )
  _ <- log("Merging events")
  mergedEvents: RDD[Event] <- merge(bucketOneEventsWithRegion, bucketTwoEventsWithRegion)
  if !mergedEvents.isEmpty()
  _ <- log("Grouping merged events by partition key")
  mergedEventsByPartitionKey: RDD[(EventsPartitionKey, Iterable[Event])] <- eventsByPartitionKey(
    mergedEvents
  )
  _ <- log(s"Storing merged events to ${config.outputBucket.name}")
  _ <- store(config.outputBucket.name, config.service, mergedEventsByPartitionKey)
} yield ()
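The helpers used above aren't shown either; they all return Try, which is what lets them be chained in the for-comprehension. Roughly (signatures only, bodies simplified or elided):

import scala.util.Try
import org.apache.spark.rdd.RDD

// Simplified placeholder for the partition key type used by store
case class EventsPartitionKey(service: String, date: String)

// Signatures only; the real buildBucketPath assembles the full prefix/wildcard path seen in the logs
private def buildBucketPath(bucketName: String): Try[String] =
  Try(s"s3://$bucketName/*.gz")

private def log(message: String): Try[Unit] = Try(println(message))

private def enrichEventsWithRegion(events: RDD[Event], region: String): Try[RDD[Event]] = ???
private def merge(a: RDD[Event], b: RDD[Event]): Try[RDD[Event]] = ???
private def eventsByPartitionKey(events: RDD[Event]): Try[RDD[(EventsPartitionKey, Iterable[Event])]] = ???
private def store(bucket: String, service: String, events: RDD[(EventsPartitionKey, Iterable[Event])]): Try[Unit] = ???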
The error I get in the logs (the actual bucket names have been changed, but the real ones do exist):
19/04/09 13:10:20 INFO SparkContext: Created broadcast 4 from rdd at MergeApp.scala:141
19/04/09 13:10:21 INFO FileSourceScanExec: Planning scan with bin packing, max size: 134217728 bytes, open cost is considered as scanning 4194304 bytes.
org.apache.spark.sql.AnalysisException: Path does not exist: hdfs:someBucket2
My stdout logs show how far the main code gets before failing:
Reading events from s3://someBucket/////*.gz
Enriching events from s3://someBucket/////*.gz with originating region data
Reading events from s3://someBucket2/////*.gz
Merge failed: Path does not exist: hdfs://someBucket2
The strange thing is that the first read always works, no matter which bucket I pick, but the second read always fails, whichever bucket it is. That tells me the buckets themselves are fine, and that something odd happens when more than one S3 bucket is involved.
All the threads I can find are about reading multiple files from a single S3 bucket, not multiple files from multiple S3 buckets.
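For what it's worth, I'd also be happy doing this as a single read over both buckets instead of two separate reads, which as far as I know would look something like this (untested):

// Untested alternative: DataFrameReader.json accepts more than one path in a single call
val allEvents = spark.read.json("s3://someBucket/////*.gz", "s3://someBucket2/////*.gz")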