Three reasons why large numbers of MapReduce tasks get KILLED_UNCLEAN

Request received to kill task 'attempt_201411191723_2827635_r_000009_0' by user
-------
Task has been KILLED_UNCLEAN by the user

The reasons are as follows:
1. An impatient user (armed with the "mapred job -kill-task" command)
2. JobTracker (to kill a speculative duplicate, or when a whole job fails)
3. Fair Scheduler (but diplomatically, it calls it "preemption")


An article by a foreign engineer tells the story in more detail:

This is one of the most bloodcurdling (and one of my favorite) stories that we have recently seen in our 190-square-meter Hadoopland. In a nutshell, some jobs were surprisingly running extremely long, because thousands of their tasks were constantly being killed for some unknown reason by someone (or something).

For example, a photo taken by our detectives shows a job that had been running for 12hrs:20min and had spawned around 13,000 tasks by that moment. However, only 4,118 map tasks had finished successfully, while 8,708 were killed (!) and … surprisingly, only 1 task failed (?) – obviously spreading panic in the Hadoopland.

Each time it murdered, the killer left the same message: "KILLED_UNCLEAN by the user" (however, even our uncle competitor Google does not quite know what it exactly means ;)). Who is "the user"? Does the killer want to impersonate someone?

More Traces Of Crime

The detectives started looking for more traces of crime. They noticed that the killed tasks belonged to ad-hoc Hive queries, which are quite resource-intensive. When comparing timestamps in the log files from the JobTracker, the TaskTrackers and the map tasks, they figured out that the JobTracker got a request to murder the tasks…

They also noticed that tasks were usually killed young, quickly after starting (within 6-16 minutes), while the surviving tasks kept running fine for long hours. The killer is unscrupulous!

Killer’s Identity

Who can actually send a kill request to JobTracker to murder thousands of tasks? The detectives quickly selected three main candidates:

  • An impatient user (armed with the "mapred job -kill-task" command)
  • JobTracker (to kill a speculative duplicate, or when a whole job fails)
  • Fair Scheduler (but diplomatically, it calls it “preemption”)

Looking at the log messages saying that a task was "KILLED UNCLEAN by the user", one could think that some user is the prime candidate for the serial killer. However, the citizens of our Hadoopland are friendly, patient and respectful of others, so it would be unfair to assume that somebody killed, in cold blood, 8,708 tasks from a single job.

JobTracker also seems to have a good alibi, because the job itself had not failed yet and speculative execution was disabled (surprisingly, Hive has its own setting, hive.mapred.reduce.tasks.speculative.execution, for disabling speculative execution for reduce tasks, which is not overridden by Hadoop's mapred.reduce.tasks.speculative.execution).
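As a side note, ruling this suspect out means making sure speculation is switched off on both sides. A minimal sketch of the relevant settings, assuming Hive on classic MRv1 (the values shown are what we wanted, not the defaults):

    <!-- hive-site.xml: Hive's own switch for reduce-side speculation -->
    <property>
      <name>hive.mapred.reduce.tasks.speculative.execution</name>
      <value>false</value>
    </property>

    <!-- mapred-site.xml: Hadoop's switches, which Hive's setting does not replace -->
    <property>
      <name>mapred.map.tasks.speculative.execution</name>
      <value>false</value>
    </property>
    <property>
      <name>mapred.reduce.tasks.speculative.execution</name>
      <value>false</value>
    </property>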

FairScheduler Accused

For some company-specific reasons, the ad-hoc Hive queries run as the hive user in our Hadoopland. Moreover, FairScheduler is configured with the default value of mapred.fairscheduler.poolnameproperty (which is user.name), so the pools are created dynamically based on the username of the user submitting the job to the cluster ("hive" in the case of our ad-hoc Hive queries).
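For the record, this pool-per-user behaviour comes straight from the scheduler configuration. A sketch of the relevant mapred-site.xml entries, assuming the stock MRv1 FairScheduler (the allocation file path is illustrative):

    <!-- mapred-site.xml: enable FairScheduler and point it at its allocation file -->
    <property>
      <name>mapred.jobtracker.taskScheduler</name>
      <value>org.apache.hadoop.mapred.FairScheduler</value>
    </property>
    <property>
      <name>mapred.fairscheduler.allocation.file</name>
      <value>/etc/hadoop/conf/fair-scheduler.xml</value>
    </property>
    <!-- default: one pool per submitting user, hence a single big "hive" pool -->
    <property>
      <name>mapred.fairscheduler.poolnameproperty</name>
      <value>user.name</value>
    </property>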

While browsing a presentation about Hadoop from 2 years earlier, one of the detectives remembered that FairScheduler usually preempts the newest tasks in an over-share pool to forcibly make some room for starved pools.

Eureka! ;)

At this moment everything became clear, and a quick look at the FairScheduler webpage confirmed it. The "hive" pool had been running over its minimum and fair shares for a long time, while the other pools were constantly running under their minimum and fair shares. In such a case, Fair Scheduler was killing Hive tasks from time to time to reassign slots to tasks from other pools.

Less Violence, More Peace

Having the evidence, we could put Fair Scheduler in prison and use Capacity Scheduler instead. Maybe in the future we will do that! Today, we believe that Fair Scheduler did not really commit the crimes intentionally – we feel that we educated it badly and gave it too much power. Today, Fair Scheduler gets a suspended sentence – we want to give it a chance to rehabilitate and become more friendly and less aggressive…

How to dignify the personality of Fair Scheduler?

Obviously, tuning settings like minSharePreemptionTimeout, fairSharePreemptionTimeout, minMaps and minReduces based on the current workload could be a good way to control the aggressiveness of Fair Scheduler's preemption. Easier said than done, because it requires a deep understanding of your workload (which may or may not change later). A sketch of such tuning follows.
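To make this concrete, here is a minimal sketch of those knobs in the FairScheduler allocation file; the pool name comes from our setup, but all the numeric values are illustrative assumptions, not recommendations:

    <?xml version="1.0"?>
    <allocations>
      <pool name="hive">
        <!-- guaranteed share for this pool; falling below it makes the pool "starved" -->
        <minMaps>20</minMaps>
        <minReduces>10</minReduces>
        <!-- wait 10 minutes below min share before preempting other pools' tasks -->
        <minSharePreemptionTimeout>600</minSharePreemptionTimeout>
      </pool>
      <!-- global: wait 15 minutes below half of fair share before preempting -->
      <fairSharePreemptionTimeout>900</fairSharePreemptionTimeout>
    </allocations>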

There is a setting called mapred.fairscheduler.preemption that enables or disables preemption. However, disabling preemption (or rather the killing, to be precise) would, in our case, only partially solve the problem. Only partially, because this issue exposed another problem in the Hadoopland – ad-hoc Hive queries are overloading the cluster. In the end, we did not disable preemption, because we were a bit worried that SLAs could not be enforced without "any" preemption.
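For completeness, that kill switch is a one-line change in mapred-site.xml; a sketch, assuming MRv1 (we ultimately left it enabled):

    <!-- mapred-site.xml: set to false to stop FairScheduler from killing tasks at all -->
    <property>
      <name>mapred.fairscheduler.preemption</name>
      <value>true</value>
    </property>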

That said, the two problems to solve are:

  • stop mass killing Hive tasks
  • stop overloading the cluster by ad-hoc Hive queries

We simply limited the number of map and reduce tasks that Fair Scheduler can run in the Hive pool (by setting maxMaps and maxReduces for that pool), as shown in the sketch below. In consequence, the Hive pool cannot contain too many tasks, so Fair Scheduler cannot kill too many of them ;) (because the Hive pool will no longer operate (too much) above its min and fair share levels). Limiting the number of tasks also prevents Hive queries from overloading the cluster (additionally, one could set the maximum number of concurrent jobs running in the Hive pool using maxRunningJobs).
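This is roughly what the fix looked like in the allocation file. The caps are illustrative – the right numbers depend entirely on your cluster's slot count:

    <allocations>
      <pool name="hive">
        <!-- hard ceilings: the pool can never grow far beyond its fair share -->
        <maxMaps>100</maxMaps>
        <maxReduces>40</maxReduces>
        <!-- optional: also cap how many ad-hoc Hive jobs run at once -->
        <maxRunningJobs>5</maxRunningJobs>
      </pool>
    </allocations>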

A nice thing to say is that Fair Scheduler is eager to cooperate, because changing FairScheduler's allocation file does not require restarting the JobTracker. The file is automatically polled for changes every 10 seconds, and if it has changed, it is reloaded and the pool configurations are updated on the fly. Thanks to that, you can easily study and reshape the personality of Fair Scheduler. ;)

