
Spark on YARN: ExitCodeException exitCode=1 when running a job with DockerContainerExecutor

Command run:

spark-submit --class org.apache.spark.examples.SparkPi --master yarn /opt/spark/examples/jars/spark-examples*.jar 10
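
(For context: the NodeManagers here run DockerContainerExecutor, as the stack traces below show. A setup like this generally depends on yarn-site.xml properties along the following lines. This is a generic sketch for Hadoop 2.x; the image name is a placeholder taken from the Hadoop docs, not the actual value on my cluster:)

<!-- yarn-site.xml (sketch): switch the NodeManager to the Docker-based executor -->
<property>
  <name>yarn.nodemanager.container-executor.class</name>
  <value>org.apache.hadoop.yarn.server.nodemanager.DockerContainerExecutor</value>
</property>
<!-- Path to the docker client binary on each NodeManager -->
<property>
  <name>yarn.nodemanager.docker-container-executor.exec-name</name>
  <value>/usr/bin/docker</value>
</property>
<!-- Image used to launch containers; placeholder value. It can also be set
     per job, but either way it must be available on every NodeManager. -->
<property>
  <name>yarn.nodemanager.docker-container-executor.image-name</name>
  <value>sequenceiq/hadoop-docker:2.4.1</value>
</property>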
Output log:
18/06/12 19:06:58 INFO spark.SparkContext: Running Spark version 2.0.2
18/06/12 19:06:58 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
18/06/12 19:06:59 INFO spark.SecurityManager: Changing view acls to: root
18/06/12 19:06:59 INFO spark.SecurityManager: Changing modify acls to: root
18/06/12 19:06:59 INFO spark.SecurityManager: Changing view acls groups to:
18/06/12 19:06:59 INFO spark.SecurityManager: Changing modify acls groups to:
18/06/12 19:06:59 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(root); groups with view permissions: Set(); users  with modify permissions: Set(root); groups with modify permissions: Set()
18/06/12 19:06:59 INFO util.Utils: Successfully started service 'sparkDriver' on port 33864.
18/06/12 19:06:59 INFO spark.SparkEnv: Registering MapOutputTracker
18/06/12 19:06:59 INFO spark.SparkEnv: Registering BlockManagerMaster
18/06/12 19:06:59 INFO storage.DiskBlockManager: Created local directory at /tmp/blockmgr-35e861d6-812c-4fe7-a79d-c03a0cd73b02
18/06/12 19:06:59 INFO memory.MemoryStore: MemoryStore started with capacity 366.3 MB
18/06/12 19:06:59 INFO spark.SparkEnv: Registering OutputCommitCoordinator
18/06/12 19:06:59 INFO util.log: Logging initialized @2206ms
18/06/12 19:07:00 INFO server.Server: jetty-9.2.z-SNAPSHOT
18/06/12 19:07:00 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@35fe2125{/jobs,null,AVAILABLE}
18/06/12 19:07:00 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@94f6bfb{/jobs/json,null,AVAILABLE}
18/06/12 19:07:00 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@34645867{/jobs/job,null,AVAILABLE}
18/06/12 19:07:00 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@2484f433{/jobs/job/json,null,AVAILABLE}
18/06/12 19:07:00 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@60b71e8f{/stages,null,AVAILABLE}
18/06/12 19:07:00 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@1255b1d1{/stages/json,null,AVAILABLE}
18/06/12 19:07:00 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@464649c{/stages/stage,null,AVAILABLE}
18/06/12 19:07:00 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@7c22d4f{/stages/stage/json,null,AVAILABLE}
18/06/12 19:07:00 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@5f59185e{/stages/pool,null,AVAILABLE}
18/06/12 19:07:00 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@60bdf15d{/stages/pool/json,null,AVAILABLE}
18/06/12 19:07:00 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@47da3952{/storage,null,AVAILABLE}
18/06/12 19:07:00 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@51e4ccb3{/storage/json,null,AVAILABLE}
18/06/12 19:07:00 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@46e8a539{/storage/rdd,null,AVAILABLE}
18/06/12 19:07:00 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@495083a0{/storage/rdd/json,null,AVAILABLE}
18/06/12 19:07:00 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@5fd62371{/environment,null,AVAILABLE}
18/06/12 19:07:00 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@28a0fd6c{/environment/json,null,AVAILABLE}
18/06/12 19:07:00 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@2b62442c{/executors,null,AVAILABLE}
18/06/12 19:07:00 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@66629f63{/executors/json,null,AVAILABLE}
18/06/12 19:07:00 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@841e575{/executors/threadDump,null,AVAILABLE}
18/06/12 19:07:00 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@27a5328c{/executors/threadDump/json,null,AVAILABLE}
18/06/12 19:07:00 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@1e5f4170{/static,null,AVAILABLE}
18/06/12 19:07:00 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@6c345c5f{/,null,AVAILABLE}
18/06/12 19:07:00 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@6b5966e1{/api,null,AVAILABLE}
18/06/12 19:07:00 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@65e61854{/stages/stage/kill,null,AVAILABLE}
18/06/12 19:07:00 INFO server.ServerConnector: Started ServerConnector@33617539{HTTP/1.1}{0.0.0.0:4040}
18/06/12 19:07:00 INFO server.Server: Started @2350ms
18/06/12 19:07:00 INFO util.Utils: Successfully started service 'SparkUI' on port 4040.
18/06/12 19:07:00 INFO ui.SparkUI: Bound SparkUI to 0.0.0.0, and started at http://192.168.3.69:4040
18/06/12 19:07:00 INFO spark.SparkContext: Added JAR file:/opt/spark/examples/jars/spark-examples_2.11-2.0.2.jar at spark://192.168.3.69:33864/jars/spark-examples_2.11-2.0.2.jar with timestamp 1528801620126
18/06/12 19:07:00 INFO client.RMProxy: Connecting to ResourceManager at /192.168.3.69:10030
18/06/12 19:07:01 INFO yarn.Client: Requesting a new application from cluster with 3 NodeManagers
18/06/12 19:07:01 INFO yarn.Client: Verifying our application has not requested more than the maximum memory capability of the cluster (8192 MB per container)
18/06/12 19:07:01 INFO yarn.Client: Will allocate AM container, with 896 MB memory including 384 MB overhead
18/06/12 19:07:01 INFO yarn.Client: Setting up container launch context for our AM
18/06/12 19:07:01 INFO yarn.Client: Setting up the launch environment for our AM container
18/06/12 19:07:01 INFO yarn.Client: Preparing resources for our AM container
18/06/12 19:07:01 INFO yarn.Client: Uploading resource file:/tmp/spark-384934c7-8475-4158-9058-3a862173524d/__spark_conf__1943133459971631969.zip -> hdfs://192.168.3.69:9000/user/root/.sparkStaging/application_1528797280853_0008/__spark_conf__.zip
18/06/12 19:07:01 INFO spark.SecurityManager: Changing view acls to: root
18/06/12 19:07:01 INFO spark.SecurityManager: Changing modify acls to: root
18/06/12 19:07:01 INFO spark.SecurityManager: Changing view acls groups to:
18/06/12 19:07:01 INFO spark.SecurityManager: Changing modify acls groups to:
18/06/12 19:07:01 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(root); groups with view permissions: Set(); users  with modify permissions: Set(root); groups with modify permissions: Set()
18/06/12 19:07:01 INFO yarn.Client: Submitting application application_1528797280853_0008 to ResourceManager
18/06/12 19:07:01 INFO impl.YarnClientImpl: Submitted application application_1528797280853_0008
18/06/12 19:07:01 INFO cluster.SchedulerExtensionServices: Starting Yarn extension services with app application_1528797280853_0008 and attemptId None
18/06/12 19:07:02 INFO yarn.Client: Application report for application_1528797280853_0008 (state: ACCEPTED)
18/06/12 19:07:02 INFO yarn.Client:
         client token: N/A
         diagnostics: N/A
         ApplicationMaster host: N/A
         ApplicationMaster RPC port: -1
         queue: default
         start time: 1528801621673
         final status: UNDEFINED
         tracking URL: http://de69:10050/proxy/application_1528797280853_0008/
         user: root
18/06/12 19:07:03 INFO yarn.Client: Application report for application_1528797280853_0008 (state: ACCEPTED)
18/06/12 19:07:04 INFO yarn.Client: Application report for application_1528797280853_0008 (state: ACCEPTED)
18/06/12 19:07:05 INFO yarn.Client: Application report for application_1528797280853_0008 (state: ACCEPTED)
18/06/12 19:07:06 INFO yarn.Client: Application report for application_1528797280853_0008 (state: ACCEPTED)
18/06/12 19:07:07 INFO yarn.Client: Application report for application_1528797280853_0008 (state: ACCEPTED)
18/06/12 19:07:08 INFO yarn.Client: Application report for application_1528797280853_0008 (state: ACCEPTED)
18/06/12 19:07:09 INFO yarn.Client: Application report for application_1528797280853_0008 (state: ACCEPTED)
18/06/12 19:07:10 INFO yarn.Client: Application report for application_1528797280853_0008 (state: FAILED)
18/06/12 19:07:10 INFO yarn.Client:
         client token: N/A
         diagnostics: Application application_1528797280853_0008 failed 2 times due to AM Container for appattempt_1528797280853_0008_000002 exited with  exitCode: 1
For more detailed output, check application tracking page:http://de69:10050/cluster/app/application_1528797280853_0008Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch:
ExitCodeException exitCode=1: Error: No such object: container_1528797280853_0008_02_000001

        at org.apache.hadoop.util.Shell.runCommand(Shell.java:545)
        at org.apache.hadoop.util.Shell.run(Shell.java:456)
        at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:722)
        at org.apache.hadoop.yarn.server.nodemanager.DockerContainerExecutor.launchContainer(DockerContainerExecutor.java:244)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)


Container exited with a non-zero exit code 1
Failing this attempt. Failing the application.
         ApplicationMaster host: N/A
         ApplicationMaster RPC port: -1
         queue: default
         start time: 1528801621673
         final status: FAILED
         tracking URL: http://de69:10050/cluster/app/application_1528797280853_0008
         user: root
18/06/12 19:07:10 INFO yarn.Client: Deleting staging directory hdfs://192.168.3.69:9000/user/root/.sparkStaging/application_1528797280853_0008
18/06/12 19:07:10 ERROR spark.SparkContext: Error initializing SparkContext.
org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master.
        at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:85)
        at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:62)
        at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:149)
        at org.apache.spark.SparkContext.<init>(SparkContext.scala:497)
        at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2258)
        at org.apache.spark.sql.SparkSession$Builder$$anonfun$8.apply(SparkSession.scala:831)
        at org.apache.spark.sql.SparkSession$Builder$$anonfun$8.apply(SparkSession.scala:823)
        at scala.Option.getOrElse(Option.scala:121)
        at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:823)
        at org.apache.spark.examples.SparkPi$.main(SparkPi.scala:31)
        at org.apache.spark.examples.SparkPi.main(SparkPi.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:736)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:185)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:210)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:124)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
18/06/12 19:07:10 INFO server.ServerConnector: Stopped ServerConnector@33617539{HTTP/1.1}{0.0.0.0:4040}
18/06/12 19:07:10 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@65e61854{/stages/stage/kill,null,UNAVAILABLE}
18/06/12 19:07:10 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@6b5966e1{/api,null,UNAVAILABLE}
18/06/12 19:07:10 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@6c345c5f{/,null,UNAVAILABLE}
18/06/12 19:07:10 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@1e5f4170{/static,null,UNAVAILABLE}
18/06/12 19:07:10 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@27a5328c{/executors/threadDump/json,null,UNAVAILABLE}
18/06/12 19:07:10 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@841e575{/executors/threadDump,null,UNAVAILABLE}
18/06/12 19:07:10 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@66629f63{/executors/json,null,UNAVAILABLE}
18/06/12 19:07:10 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@2b62442c{/executors,null,UNAVAILABLE}
18/06/12 19:07:10 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@28a0fd6c{/environment/json,null,UNAVAILABLE}
18/06/12 19:07:10 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@5fd62371{/environment,null,UNAVAILABLE}
18/06/12 19:07:10 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@495083a0{/storage/rdd/json,null,UNAVAILABLE}
18/06/12 19:07:10 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@46e8a539{/storage/rdd,null,UNAVAILABLE}
18/06/12 19:07:10 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@51e4ccb3{/storage/json,null,UNAVAILABLE}
18/06/12 19:07:10 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@47da3952{/storage,null,UNAVAILABLE}
18/06/12 19:07:10 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@60bdf15d{/stages/pool/json,null,UNAVAILABLE}
18/06/12 19:07:10 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@5f59185e{/stages/pool,null,UNAVAILABLE}
18/06/12 19:07:10 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@7c22d4f{/stages/stage/json,null,UNAVAILABLE}
18/06/12 19:07:10 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@464649c{/stages/stage,null,UNAVAILABLE}
18/06/12 19:07:10 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@1255b1d1{/stages/json,null,UNAVAILABLE}
18/06/12 19:07:10 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@60b71e8f{/stages,null,UNAVAILABLE}
18/06/12 19:07:10 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@2484f433{/jobs/job/json,null,UNAVAILABLE}
18/06/12 19:07:10 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@34645867{/jobs/job,null,UNAVAILABLE}
18/06/12 19:07:10 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@94f6bfb{/jobs/json,null,UNAVAILABLE}
18/06/12 19:07:10 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@35fe2125{/jobs,null,UNAVAILABLE}
18/06/12 19:07:10 INFO ui.SparkUI: Stopped Spark web UI at http://192.168.3.69:4040
18/06/12 19:07:10 WARN cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: Attempted to request executors before the AM has registered!
18/06/12 19:07:10 INFO cluster.YarnClientSchedulerBackend: Shutting down all executors
18/06/12 19:07:10 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint: Asking each executor to shut down
18/06/12 19:07:10 INFO cluster.SchedulerExtensionServices: Stopping SchedulerExtensionServices
(serviceOption=None,
 services=List(),
 started=false)
18/06/12 19:07:10 INFO cluster.YarnClientSchedulerBackend: Stopped
18/06/12 19:07:10 INFO spark.MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
18/06/12 19:07:10 ERROR util.Utils: Uncaught exception in thread main
java.lang.NullPointerException
        at org.apache.spark.network.shuffle.ExternalShuffleClient.close(ExternalShuffleClient.java:152)
        at org.apache.spark.storage.BlockManager.stop(BlockManager.scala:1360)
        at org.apache.spark.SparkEnv.stop(SparkEnv.scala:87)
        at org.apache.spark.SparkContext$$anonfun$stop$11.apply$mcV$sp(SparkContext.scala:1797)
        at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1290)
        at org.apache.spark.SparkContext.stop(SparkContext.scala:1796)
        at org.apache.spark.SparkContext.<init>(SparkContext.scala:565)
        at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2258)
        at org.apache.spark.sql.SparkSession$Builder$$anonfun$8.apply(SparkSession.scala:831)
        at org.apache.spark.sql.SparkSession$Builder$$anonfun$8.apply(SparkSession.scala:823)
        at scala.Option.getOrElse(Option.scala:121)
        at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:823)
        at org.apache.spark.examples.SparkPi$.main(SparkPi.scala:31)
        at org.apache.spark.examples.SparkPi.main(SparkPi.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:736)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:185)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:210)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:124)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
18/06/12 19:07:10 INFO spark.SparkContext: Successfully stopped SparkContext
Exception in thread "main" org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master.
        at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:85)
        at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:62)
        at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:149)
        at org.apache.spark.SparkContext.<init>(SparkContext.scala:497)
        at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2258)
        at org.apache.spark.sql.SparkSession$Builder$$anonfun$8.apply(SparkSession.scala:831)
        at org.apache.spark.sql.SparkSession$Builder$$anonfun$8.apply(SparkSession.scala:823)
        at scala.Option.getOrElse(Option.scala:121)
        at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:823)
        at org.apache.spark.examples.SparkPi$.main(SparkPi.scala:31)
        at org.apache.spark.examples.SparkPi.main(SparkPi.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:736)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:185)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:210)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:124)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
18/06/12 19:07:10 INFO storage.DiskBlockManager: Shutdown hook called
18/06/12 19:07:10 INFO util.ShutdownHookManager: Shutdown hook called
18/06/12 19:07:10 INFO util.ShutdownHookManager: Deleting directory /tmp/spark-384934c7-8475-4158-9058-3a862173524d
18/06/12 19:07:10 INFO util.ShutdownHookManager: Deleting directory /tmp/spark-384934c7-8475-4158-9058-3a862173524d/userFiles-b879b5ae-c200-4e4c-86b6-4ee9bfb3f08f
Logs on the NodeManager:
2018-06-12 19:07:02,060 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for appattempt_1528797280853_0008_000001 (auth:SIMPLE)
2018-06-12 19:07:02,069 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Start request for container_1528797280853_0008_01_000001 by user root
2018-06-12 19:07:02,069 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Creating a new application reference for app application_1528797280853_0008
2018-06-12 19:07:02,070 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=root IP=192.168.3.69 OPERATION=Start Container Request TARGET=ContainerManageImpl RESULT=SUCCESS APPID=application_1528797280853_0008 CONTAINERID=container_1528797280853_0008_01_000001
2018-06-12 19:07:02,071 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl: Application application_1528797280853_0008 transitioned from NEW to INITING
2018-06-12 19:07:02,071 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl: Adding container_1528797280853_0008_01_000001 to application application_1528797280853_0008
2018-06-12 19:07:02,071 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl: Application application_1528797280853_0008 transitioned from INITING to RUNNING
2018-06-12 19:07:02,071 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl: Container container_1528797280853_0008_01_000001 transitioned from NEW to LOCALIZING
2018-06-12 19:07:02,071 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event CONTAINER_INIT for appId application_1528797280853_0008
2018-06-12 19:07:02,071 INFO org.apache.spark.network.yarn.YarnShuffleService: Initializing container container_1528797280853_0008_01_000001
2018-06-12 19:07:02,071 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource hdfs://192.168.3.69:9000/user/root/.sparkStaging/application_1528797280853_0008/__spark_conf__.zip transitioned from INIT to DOWNLOADING
2018-06-12 19:07:02,071 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Created localizer for container_1528797280853_0008_01_000001
2018-06-12 19:07:02,073 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Writing credentials to the nmPrivate file /tmp/hadoop/nm-local-dir/nmPrivate/container_1528797280853_0008_01_000001.tokens. Credentials list:
2018-06-12 19:07:02,086 INFO org.apache.hadoop.yarn.server.nodemanager.DockerContainerExecutor: Initializing user root
2018-06-12 19:07:02,088 INFO org.apache.hadoop.yarn.server.nodemanager.DockerContainerExecutor: Copying from /tmp/hadoop/nm-local-dir/nmPrivate/container_1528797280853_0008_01_000001.tokens to /tmp/hadoop/nm-local-dir/usercache/root/appcache/application_1528797280853_0008/container_1528797280853_0008_01_000001.tokens
2018-06-12 19:07:02,088 INFO org.apache.hadoop.yarn.server.nodemanager.DockerContainerExecutor: CWD set to /tmp/hadoop/nm-local-dir/usercache/root/appcache/application_1528797280853_0008 = file:/tmp/hadoop/nm-local-dir/usercache/root/appcache/application_1528797280853_0008
2018-06-12 19:07:02,153 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource hdfs://192.168.3.69:9000/user/root/.sparkStaging/application_1528797280853_0008/__spark_conf__.zip(->/tmp/hadoop/nm-local-dir/usercache/root/filecache/17/__spark_conf__.zip) transitioned from DOWNLOADING to LOCALIZED
2018-06-12 19:07:02,154 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl: Container container_1528797280853_0008_01_000001 transitioned from LOCALIZING to LOCALIZED
2018-06-12 19:07:02,174 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl: Container container_1528797280853_0008_01_000001 transitioned from LOCALIZED to RUNNING
2018-06-12 19:07:03,265 WARN org.apache.hadoop.yarn.server.nodemanager.DockerContainerExecutor: Exit code from container container_1528797280853_0008_01_000001 is : 1
2018-06-12 19:07:03,265 WARN org.apache.hadoop.yarn.server.nodemanager.DockerContainerExecutor: Exception from container-launch with container ID: container_1528797280853_0008_01_000001 and exit code: 1
ExitCodeException exitCode=1: Error: No such object: container_1528797280853_0008_01_000001

        at org.apache.hadoop.util.Shell.runCommand(Shell.java:545)
        at org.apache.hadoop.util.Shell.run(Shell.java:456)
        at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:722)
        at org.apache.hadoop.yarn.server.nodemanager.DockerContainerExecutor.launchContainer(DockerContainerExecutor.java:244)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
2018-06-12 19:07:03,266 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor:
2018-06-12 19:07:03,266 WARN org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: Container exited with a non-zero exit code 1
2018-06-12 19:07:03,266 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl: Container container_1528797280853_0008_01_000001 transitioned from RUNNING to EXITED_WITH_FAILURE
2018-06-12 19:07:03,266 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: Cleaning up container container_1528797280853_0008_01_000001
2018-06-12 19:07:03,351 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Starting resource-monitoring for container_1528797280853_0008_01_000001
2018-06-12 19:07:05,371 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: Could not get pid for container_1528797280853_0008_01_000001. Waited for 2000 ms.
2018-06-12 19:07:05,427 WARN org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=root OPERATION=Container Finished - Failed TARGET=ContainerImpl RESULT=FAILURE DESCRIPTION=Container failed with state: EXITED_WITH_FAILURE APPID=application_1528797280853_0008 CONTAINERID=container_1528797280853_0008_01_000001
2018-06-12 19:07:05,427 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl: Container container_1528797280853_0008_01_000001 transitioned from EXITED_WITH_FAILURE to DONE
2018-06-12 19:07:05,427 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl: Removing container_1528797280853_0008_01_000001 from application application_1528797280853_0008
2018-06-12 19:07:05,427 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event CONTAINER_STOP for appId application_1528797280853_0008
2018-06-12 19:07:05,427 INFO org.apache.spark.network.yarn.YarnShuffleService: Stopping container container_1528797280853_0008_01_000001
2018-06-12 19:07:06,352 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Stopping resource-monitoring for container_1528797280853_0008_01_000001
2018-06-12 19:07:07,052 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Removed completed containers from NM context: [container_1528797280853_0008_01_000001]
2018-06-12 19:07:11,056 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl: Application application_1528797280853_0008 transitioned from RUNNING to APPLICATION_RESOURCES_CLEANINGUP
2018-06-12 19:07:11,056 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event APPLICATION_STOP for appId application_1528797280853_0008
2018-06-12 19:07:11,056 INFO org.apache.spark.network.yarn.YarnShuffleService: Stopping application application_1528797280853_0008
2018-06-12 19:07:11,056 INFO org.apache.spark.network.shuffle.ExternalShuffleBlockResolver: Application application_1528797280853_0008 removed, cleanupLocalDirs = false
2018-06-12 19:07:11,057 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl: Application application_1528797280853_0008 transitioned from APPLICATION_RESOURCES_CLEANINGUP to FINISHED
2018-06-12 19:07:11,057 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.loghandler.NonAggregatingLogHandler: Scheduling Log Deletion for application: application_1528797280853_0008, with delay of 10800 seconds
I have been stuck on this problem for a long time without solving it, and I have not found the same issue anywhere online. Any help from the experts here would be much appreciated. Thanks!

kun坤 2020-05-29 23:06:16
1 Answer
  • The report shows "ApplicationMaster host: N/A, ApplicationMaster RPC port: -1, queue: default, start time: 1528801621673, final status: FAILED, tracking URL: http://de69:10050/cluster/app/application_1528797280853_0008, user: root". The ApplicationMaster host could never be obtained? I suggest digging further into the logs, for example with the commands sketched below; there is not much information to go on here.

    Follow-up from the asker: Sorry, I am new to this and do not know it well. Could you tell me which logs I still need to look at? Thanks.
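
    A sketch of typical diagnostics, reusing the application and container IDs from the logs above. The "Error: No such object: container_..." text looks like Docker's own error from a docker inspect against a container name that was never created, which would mean the docker run issued by DockerContainerExecutor had already failed, most commonly because the configured image is missing on that NodeManager:

    # Aggregated logs for the failed application (works if log aggregation is
    # enabled; otherwise check the NodeManager's container log directories):
    yarn logs -applicationId application_1528797280853_0008

    # On the NodeManager (de69 / 192.168.3.69): did Docker ever create a
    # container with the expected name, and if so, why did it exit?
    docker ps -a | grep container_1528797280853_0008
    docker inspect container_1528797280853_0008_02_000001

    # Verify the image configured via
    # yarn.nodemanager.docker-container-executor.image-name exists on this node:
    docker images

    If the configured image is not present, pulling it on every NodeManager (docker pull <image>) before resubmitting would be the first thing to try.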

    2020-05-29 23:06:30