
What is E-MapReduce software configuration?



What software configuration is for


Hadoop, Hive, Pig, and similar software ship with a large number of configuration settings, and the software configuration feature lets you modify them. For example, the number of HDFS NameNode service threads, dfs.namenode.handler.count, defaults to 10, and you may want to raise it to 50; the HDFS block size, dfs.blocksize, defaults to 128 MB, and if your workload consists mostly of small files you may want to reduce it to 64 MB.
Currently this operation can be performed only once, when the cluster is created.
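As a minimal sketch, the two changes above could be written as the following override file (the byte value for 64 MB is an assumption; dfs.blocksize also accepts other notations depending on the Hadoop version):

```json
{
    "configurations": [
        {
            "classification": "hdfs-site",
            "properties": {
                "dfs.namenode.handler.count": "50",
                "dfs.blocksize": "67108864"
            }
        }
    ]
}
```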

How to use it


  1. Log on to the Alibaba Cloud E-MapReduce console and open the cluster list.

  2. Select a region at the top of the page; the cluster you create will reside in that region.

  3. Click Create Cluster to open the cluster-creation page.

  4. The Software Configuration step of the creation flow lists every bundled component and its version. To change the cluster's configuration, use the Software Configuration (optional) field to select a JSON configuration file that overrides or adds to the cluster's default parameters. A sample JSON file looks like this:

{
    "configurations": [
        {
            "classification": "core-site",
            "properties": {
                "fs.trash.interval": "61"
            }
        },
        {
            "classification": "hadoop-log4j",
            "properties": {
                "hadoop.log.file": "hadoop1.log",
                "hadoop.root.logger": "INFO",
                "a.b.c": "ABC"
            }
        },
        {
            "classification": "hdfs-site",
            "properties": {
                "dfs.namenode.handler.count": "12"
            }
        },
        {
            "classification": "mapred-site",
            "properties": {
                "mapreduce.task.io.sort.mb": "201"
            }
        },
        {
            "classification": "yarn-site",
            "properties": {
                "hadoop.security.groups.cache.secs": "251",
                "yarn.nodemanager.remote-app-log-dir": "/tmp/logs1"
            }
        },
        {
            "classification": "httpsfs-site",
            "properties": {
                "a.b.c.d": "200"
            }
        },
        {
            "classification": "capacity-scheduler",
            "properties": {
                "yarn.scheduler.capacity.maximum-am-resource-percent": "0.2"
            }
        },
        {
            "classification": "hadoop-env",
            "properties": {
                "BC": "CD"
            },
            "configurations": [
                {
                    "classification": "export",
                    "properties": {
                        "AB": "${BC}",
                        "HADOOP_CLIENT_OPTS": "\"-Xmx512m -Xms512m $HADOOP_CLIENT_OPTS\""
                    }
                }
            ]
        },
        {
            "classification": "httpfs-env",
            "properties": {
            },
            "configurations": [
                {
                    "classification": "export",
                    "properties": {
                        "HTTPFS_SSL_KEYSTORE_PASS": "passwd"
                    }
                }
            ]
        },
        {
            "classification": "mapred-env",
            "properties": {
            },
            "configurations": [
                {
                    "classification": "export",
                    "properties": {
                        "HADOOP_JOB_HISTORYSERVER_HEAPSIZE": "1001"
                    }
                }
            ]
        },
        {
            "classification": "yarn-env",
            "properties": {
            },
            "configurations": [
                {
                    "classification": "export",
                    "properties": {
                        "HADOOP_YARN_USER": "${HADOOP_YARN_USER:-yarn1}"
                    }
                }
            ]
        },
        {
            "classification": "pig",
            "properties": {
                "pig.tez.auto.parallelism": "false"
            }
        },
        {
            "classification": "pig-log4j",
            "properties": {
                "log4j.logger.org.apache.pig": "error, A"
            }
        },
        {
            "classification": "hive-env",
            "properties": {
                "BC": "CD"
            },
            "configurations": [
                {
                    "classification": "export",
                    "properties": {
                        "AB": "${BC}",
                        "HADOOP_CLIENT_OPTS1": "\"-Xmx512m -Xms512m $HADOOP_CLIENT_OPTS1\""
                    }
                }
            ]
        },
        {
            "classification": "hive-site",
            "properties": {
                "hive.tez.java.opts": "-Xmx3900m"
            }
        },
        {
            "classification": "hive-exec-log4j",
            "properties": {
                "log4j.logger.org.apache.zookeeper.ClientCnxnSocketNIO": "INFO,FA"
            }
        },
        {
            "classification": "hive-log4j",
            "properties": {
                "log4j.logger.org.apache.zookeeper.server.NIOServerCnxn": "INFO,DRFA"
            }
        }
    ]
}

The classification parameter names the configuration file to modify, and the properties parameter holds the key-value pairs to set. If the default configuration file already contains a key, its value is overwritten; otherwise the key-value pair is added.
The mapping between configuration files and classification values is shown in the tables below:

Hadoop
Filename                      classification
core-site.xml                 core-site
log4j.properties              hadoop-log4j
hdfs-site.xml                 hdfs-site
mapred-site.xml               mapred-site
yarn-site.xml                 yarn-site
httpsfs-site.xml              httpsfs-site
capacity-scheduler.xml        capacity-scheduler
hadoop-env.sh                 hadoop-env
httpfs-env.sh                 httpfs-env
mapred-env.sh                 mapred-env
yarn-env.sh                   yarn-env

Pig
Filename                      classification
pig.properties                pig
log4j.properties              pig-log4j

Hive
Filename                      classification
hive-env.sh                   hive-env
hive-site.xml                 hive-site
hive-exec-log4j.properties    hive-exec-log4j
hive-log4j.properties         hive-log4j

Flat XML files such as core-site have only one level, so all settings go directly in properties. Shell files such as hadoop-env can have a two-level structure, which is expressed by nesting configurations; see the hadoop-env part of the sample above, which adds -Xmx512m -Xms512m to the export of the HADOOP_CLIENT_OPTS property.
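As a sketch of the effect, assuming the nested export block is rendered into hadoop-env.sh as plain export lines (the exact rendering is not documented here), the hadoop-env entry in the sample would produce something like:

```shell
# Hypothetical rendering of the sample's hadoop-env "export" block.
# BC comes from the top-level properties; AB and HADOOP_CLIENT_OPTS
# come from the nested "export" classification.
export BC="CD"
export AB="${BC}"
export HADOOP_CLIENT_OPTS="-Xmx512m -Xms512m $HADOOP_CLIENT_OPTS"
```

Note how ${BC} lets a nested export reference a top-level property, and how HADOOP_CLIENT_OPTS prepends the new JVM flags while preserving any options already set.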
After you have finished, confirm the settings and click Next.

nicenelly 2017-10-30 14:31:48