ACL Implementation of the Capacity Scheduler on EMR Clusters

Summary: Following up on the implementation principles of YARN's Capacity Scheduler, this article walks through configuring the Capacity Scheduler, the pitfalls to watch out for, and hands-on experiments on an EMR cluster.

Background

The previous article covered the principles of YARN's Capacity Scheduler and experimented with using it on an EMR cluster to isolate cluster resources and enforce quotas. This article covers the ACL implementation of the Capacity Scheduler on EMR clusters.

Why do this? The previous setup split the cluster's resources into several queues, each with its own resource share and scheduling priority. If every tenant plays by the rules and submits jobs only to its own queue, all is well. But any user who understands how the Capacity Scheduler works can just as easily submit into, and occupy, another group's queue. That is what the Capacity Scheduler's ACL settings are for.
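To make the risk concrete, here is a minimal sketch (the test_user account and the example-jar path are assumptions): before any ACLs are configured, any user can route a job into any queue with a single -D option.

# Any user can explicitly target another group's queue when no ACLs are set:
su -l test_user -c "hadoop jar /usr/lib/hadoop-current/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar pi -Dmapreduce.job.queuename=b 2 10"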

Key parameters

  • yarn.scheduler.capacity.queue-mappings

    • Maps users (or groups) to queues. With a mapping in place, a user's jobs land in the mapped queue by default, with no queue parameter needed, which is convenient. The format is: [u|g]:[name]:[queue_name][,next mapping]*
  • yarn.scheduler.capacity.root.{queue-path}.acl_administer_queue

    • Controls who can administer jobs in this queue. The upstream description reads "The ACL of who can administer jobs on the queue." An asterisk (*) means everyone; a single space means no one.
  • yarn.scheduler.capacity.root.{queue-path}.acl_submit_applications

    • Controls who can submit jobs to this queue. The upstream description reads "The ACL of who can submit applications to the queue." An asterisk (*) means everyone; a single space means no one.
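After changing these ACLs, a quick way to see what a user is actually granted is the queue CLI; a minimal sketch (run it as whichever user you want to inspect):

# Prints, per queue, the operations the calling user holds,
# e.g. SUBMIT_APPLICATIONS and ADMINISTER_QUEUE.
su -l hadoop -c 'mapred queue -showacls'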

Concrete steps on an EMR cluster

  • Create an EMR cluster
  • Modify the relevant configuration to enable queue ACLs

    • yarn-site: yarn.acl.enable=true
    • mapred-site: mapreduce.cluster.acls.enabled=true
    • hdfs-site: dfs.permissions.enabled=true (this one has nothing to do with Capacity Scheduler queue ACLs; it enables HDFS permission checking and is set here in passing)
    • mapred-site: mapreduce.job.acl-view-job=* (once ACL checking is enabled, this is needed as well, otherwise job details cannot be viewed in the Hadoop web UI)
  • Restart YARN and HDFS for the configuration to take effect (as the root account)

    • su -l hdfs -c '/usr/lib/hadoop-current/sbin/stop-dfs.sh'
    • su -l hadoop -c '/usr/lib/hadoop-current/sbin/stop-yarn.sh'
    • su -l hdfs -c '/usr/lib/hadoop-current/sbin/start-dfs.sh'
    • su -l hadoop -c '/usr/lib/hadoop-current/sbin/start-yarn.sh'
    • su -l hadoop -c '/usr/lib/hadoop-current/sbin/yarn-daemon.sh start proxyserver'
  • Modify the Capacity Scheduler configuration
    Full configuration:
<configuration>
  <property>
    <name>yarn.scheduler.capacity.maximum-applications</name>
    <value>10000</value>
    <description>
      Maximum number of applications that can be pending and running.
    </description>
  </property>

  <property>
    <name>yarn.scheduler.capacity.maximum-am-resource-percent</name>
    <value>0.25</value>
    <description>
      Maximum percent of resources in the cluster which can be used to run
      application masters i.e. controls number of concurrent running
      applications.
    </description>
  </property>

  <property>
    <name>yarn.scheduler.capacity.resource-calculator</name>
    <value>org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator</value>
    <description>
      The ResourceCalculator implementation to be used to compare
      Resources in the scheduler.
      The default i.e. DefaultResourceCalculator only uses Memory while
      DominantResourceCalculator uses dominant-resource to compare
      multi-dimensional resources such as Memory, CPU etc.
    </description>
  </property>

  <property>
    <name>yarn.scheduler.capacity.root.queues</name>
    <value>a,b,default</value>
    <description>
      The queues at this level (root is the root queue).
    </description>
  </property>

  <property>
    <name>yarn.scheduler.capacity.root.default.capacity</name>
    <value>20</value>
    <description>Default queue target capacity.</description>
  </property>

  <property>
    <name>yarn.scheduler.capacity.root.a.capacity</name>
    <value>30</value>
    <description>Target capacity of queue a.</description>
  </property>

  <property>
    <name>yarn.scheduler.capacity.root.b.capacity</name>
    <value>50</value>
    <description>Target capacity of queue b.</description>
  </property>


  <property>
    <name>yarn.scheduler.capacity.root.default.user-limit-factor</name>
    <value>1</value>
    <description>
      Default queue user limit a percentage from 0.0 to 1.0.
    </description>
  </property>

  <property>
    <name>yarn.scheduler.capacity.root.default.maximum-capacity</name>
    <value>100</value>
    <description>
      The maximum capacity of the default queue.
    </description>
  </property>

  <property>
    <name>yarn.scheduler.capacity.root.default.state</name>
    <value>RUNNING</value>
    <description>
      The state of the default queue. State can be one of RUNNING or STOPPED.
    </description>
  </property>

  <property>
    <name>yarn.scheduler.capacity.root.a.state</name>
    <value>RUNNING</value>
    <description>
      The state of queue a. State can be one of RUNNING or STOPPED.
    </description>
  </property>

  <property>
    <name>yarn.scheduler.capacity.root.b.state</name>
    <value>RUNNING</value>
    <description>
      The state of queue b. State can be one of RUNNING or STOPPED.
    </description>
  </property>


  <property>
    <name>yarn.scheduler.capacity.root.acl_submit_applications</name>
    <value> </value>
    <description>
      The ACL of who can submit jobs to the root queue.
    </description>
  </property>

  <property>
    <name>yarn.scheduler.capacity.root.a.acl_submit_applications</name>
    <value>root</value>
    <description>
      The ACL of who can submit jobs to queue a.
    </description>
  </property>

  <property>
    <name>yarn.scheduler.capacity.root.b.acl_submit_applications</name>
    <value>hadoop</value>
    <description>
      The ACL of who can submit jobs to queue b.
    </description>
  </property>

  <property>
    <name>yarn.scheduler.capacity.root.default.acl_submit_applications</name>
    <value>root</value>
    <description>
      The ACL of who can submit jobs to the default queue.
    </description>
  </property>

<property>
    <name>yarn.scheduler.capacity.root.acl_administer_queue</name>
    <value> </value>
    <description>
      The ACL of who can administer jobs on the root queue.
    </description>
  </property>

  <property>
    <name>yarn.scheduler.capacity.root.default.acl_administer_queue</name>
    <value>root</value>
    <description>
      The ACL of who can administer jobs on the default queue.
    </description>
  </property>

  <property>
    <name>yarn.scheduler.capacity.root.a.acl_administer_queue</name>
    <value>root</value>
    <description>
      The ACL of who can administer jobs on queue a.
    </description>
  </property>

  <property>
    <name>yarn.scheduler.capacity.root.b.acl_administer_queue</name>
    <value>root</value>
    <description>
      The ACL of who can administer jobs on queue b.
    </description>
  </property>

  <property>
    <name>yarn.scheduler.capacity.node-locality-delay</name>
    <value>40</value>
    <description>
      Number of missed scheduling opportunities after which the CapacityScheduler
      attempts to schedule rack-local containers.
      Typically this should be set to the number of nodes in the cluster. By
      default it is set to 40, approximately the number of nodes in one rack.
    </description>
  </property>

  <property>
    <name>yarn.scheduler.capacity.queue-mappings</name>
    <value>u:hadoop:b,u:root:a</value>
  </property>

  <property>
    <name>yarn.scheduler.capacity.queue-mappings-override.enable</name>
    <value>false</value>
    <description>
      If a queue mapping is present, will it override the value specified
      by the user? This can be used by administrators to place jobs in queues
      that are different than the one specified by the user.
      The default is false.
    </description>
  </property>

</configuration>

The configuration above defines three queues with their resource shares. The queue mappings send user hadoop to queue b by default (when no queue is specified) and user root to queue a. The ACLs allow hadoop to submit jobs only to queue b, root can submit to every queue, and all other users cannot submit jobs at all.
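A hedged smoke test of the behavior just described (the example-jar path is an assumption):

# As hadoop, no queue parameter is needed; the mapping u:hadoop:b routes the job to queue b:
su -l hadoop -c "hadoop jar /usr/lib/hadoop-current/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar pi 2 10"
# Explicitly targeting queue a as hadoop should now be rejected by queue a's ACL:
su -l hadoop -c "hadoop jar /usr/lib/hadoop-current/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar pi -Dmapreduce.job.queuename=a 2 10"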

Pitfalls

  • The acl_administer_queue setting

    • Each queue supports ACLs for two operations: acl_administer_queue and acl_submit_applications. Going by the semantics, controlling who may submit jobs should only require the queue's acl_submit_applications property, and the documentation reads that way too. In reality that is not enough: anyone with administer permission on the queue can also submit jobs to it. This took a long time to track down; only the source made it clear. Below is the relevant excerpt (from LeafQueue#submitApplication in Hadoop 2.x), followed by a quick sketch of the effect.
  @Override
  public void submitApplication(ApplicationId applicationId, String userName,
      String queue) throws AccessControlException {
    // Careful! Locking order is important!

    // Check queue ACLs: the submission is rejected only if the user passes
    // NEITHER the SUBMIT_APPLICATIONS check NOR the ADMINISTER_QUEUE check,
    // i.e. administer permission alone is enough to submit.
    UserGroupInformation userUgi = UserGroupInformation.createRemoteUser(userName);
    if (!hasAccess(QueueACL.SUBMIT_APPLICATIONS, userUgi)
        && !hasAccess(QueueACL.ADMINISTER_QUEUE, userUgi)) {
      throw new AccessControlException("User " + userName + " cannot submit" +
          " applications to queue " + getQueuePath());
    }
    // ... remainder of the method omitted ...
  }
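A hedged way to confirm this in practice (the user name alice is an assumption): list a user only in acl_administer_queue of queue a, leave it out of acl_submit_applications, and the submission still succeeds.

# alice holds only administer rights on queue a, yet this is accepted:
su -l alice -c "hadoop jar /usr/lib/hadoop-current/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar pi -Dmapreduce.job.queuename=a 2 10"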
  • The root queue configuration

    • To restrict a user's access to a queue, the root queue must be configured as well; setting only the leaf queues is not enough. ACL checks walk up the queue hierarchy, and a permission granted on a parent queue wins; the code comment puts it as: // recursively look up the queue to see if parent queue has the permission. This is not what most people would expect. Since root's ACLs default to * (everyone), root has to be locked down first, otherwise none of the child-queue ACLs have any effect:
<property>
  <name>yarn.scheduler.capacity.root.acl_submit_applications</name>
  <value> </value>
  <description>
    The ACL of who can submit jobs to the root queue.
  </description>
</property>
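With root locked down like this, a user that appears in no ACL is rejected (a sketch; test_user is an assumption). The failure surfaces as the AccessControlException from the source excerpt above, e.g. "User test_user cannot submit applications to queue root.b":

# Expected to fail the queue ACL check and throw AccessControlException:
su -l test_user -c "hadoop jar /usr/lib/hadoop-current/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar pi -Dmapreduce.job.queuename=b 2 10"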