柏辰爸爸 2016-09-28
To add Hadoop CGroups and limit the CPU usage of certain jobs, I recently ran into the following problem while setting up CGroups on Hadoop-2.6.0 (note: 500 is the uid of the hadoop user):
File /opt/hadoop/hadoop-2.6.0/etc/hadoop/container-executor.cfg must be owned by root, but is owned by 500
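Before changing anything, it is easy to confirm what the message is complaining about, for example by checking the hadoop user's uid and the current owner of the file. A minimal sketch, with the path taken from the error above and GNU stat assumed:

# Confirm that uid 500 is indeed the hadoop user and that it owns the config file
id hadoop
stat -c '%u %U %n' /opt/hadoop/hadoop-2.6.0/etc/hadoop/container-executor.cfg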
So I changed the owner of container-executor.cfg to root, only to run into the next problem:
File /opt/hadoop/hadoop-2.6.0/etc/hadoop must be owned by root, but is owned by 500
What is going on here?
It turns out that LinuxContainerExecutor launches containers through the container-executor binary, and for security reasons it requires that the configuration file it depends on, container-executor.cfg, and all of its parent directories be owned by root. The check in the source code looks like this:
/**
 * Ensure that the configuration file and all of the containing directories
 * are only writable by root. Otherwise, an attacker can change the
 * configuration and potentially cause damage.
 * returns 0 if permissions are ok
 */
int check_configuration_permissions(const char* file_name) {
  // copy the input so that we can modify it with dirname
  char* dir = strdup(file_name);
  char* buffer = dir;
  do {
    if (!is_only_root_writable(dir)) {
      free(buffer);
      return -1;
    }
    dir = dirname(dir);
  } while (strcmp(dir, "/") != 0);
  free(buffer);
  return 0;
}

/**
 * Is the file/directory only writable by root.
 * Returns 1 if true
 */
static int is_only_root_writable(const char *file) {
  struct stat file_stat;
  if (stat(file, &file_stat) != 0) {
    fprintf(ERRORFILE, "Can't stat file %s - %s\n", file, strerror(errno));
    return 0;
  }
  if (file_stat.st_uid != 0) {
    fprintf(ERRORFILE, "File %s must be owned by root, but is owned by %d\n",
            file, file_stat.st_uid);
    return 0;
  }
  if ((file_stat.st_mode & (S_IWGRP | S_IWOTH)) != 0) {
    fprintf(ERRORFILE,
            "File %s must not be world or group writable, but is %03o\n",
            file, file_stat.st_mode & (~S_IFMT));
    return 0;
  }
  return 1;
}
In check_configuration_permissions, the path dir of the configuration file container-executor.cfg is walked upward in a do loop, calling is_only_root_writable at each level to verify that the owner is root (and that the path is not group- or world-writable); otherwise the container is not started.
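The effect of this loop can be reproduced from the shell: walk from the configuration file up towards /, printing owner and mode at every level, just as is_only_root_writable does. A minimal sketch, assuming GNU stat and the installation path from the errors above:

# Mirror check_configuration_permissions: every level up to (but not including) /
# must be owned by root and must not be group- or world-writable
f=/opt/hadoop/hadoop-2.6.0/etc/hadoop/container-executor.cfg
while [ "$f" != "/" ]; do
  stat -c '%U %a %n' "$f"
  f=$(dirname "$f")
done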
Hadoop-2.6.0 determines the location of container-executor.cfg at build time as follows. First, the pom.xml of hadoop-yarn-server-nodemanager defines a property named container-executor.conf.dir with the relative value ../etc/hadoop, which in practice resolves to $HADOOP_HOME/etc/hadoop/:
<properties>
  <!-- Basedir needed for generating FindBugs warnings using parent pom -->
  <yarn.basedir>${project.parent.parent.basedir}</yarn.basedir>
  <container-executor.conf.dir>../etc/hadoop</container-executor.conf.dir>
  <container-executor.additional_cflags></container-executor.additional_cflags>
</properties>
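If you want to see what this property resolves to for your checkout, the standard maven-help-plugin can print it (run inside the hadoop-yarn-server-nodemanager module; the exact output format depends on the plugin version):

# Print the resolved value of container-executor.conf.dir
mvn help:evaluate -Dexpression=container-executor.conf.dir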
This path is then used at cmake build time: it is passed into the cmake environment as -DHADOOP_CONF_DIR, as shown below:
<exec executable="cmake" dir="${project.build.directory}/native" failonerror="true">
  <arg line="${basedir}/src/ -DHADOOP_CONF_DIR=${container-executor.conf.dir} -DJVM_ARCH_DATA_MODEL=${sun.arch.data.model}"/>
  <env key="CFLAGS" value="${container-executor.additional_cflags}"/>
</exec>
This leaves us with a choice: either we change the owner of every parent directory of $HADOOP_HOME and every directory down to container-executor.cfg to root, or we reset the path in the build and recompile the whole of Hadoop-2.6.0, with a command like the following:
mvn package -Pdist,native -DskipTests -Dtar -Dcontainer-executor.conf.dir=/etc/hadoop
Neither option is great: the former carries too much risk, and the latter is cumbersome, requiring a full build environment with maven, protobuf and so on.
It is enough to recompile just container-executor with cmake -DHADOOP_CONF_DIR=/etc/hadoop. The steps are as follows:
cd /tmp/lp_test
tar -zxf hadoop-2.6.0-src.tar.gz
chown -R root:root hadoop-2.6.0-src
cd /tmp/lp_test/hadoop-2.6.0-src/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/
cmake src -DHADOOP_CONF_DIR=/etc/hadoop
make
The required container-executor binary can then be found under target/usr/local/bin/.
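The rebuilt binary still has to be installed with the ownership and permissions that LinuxContainerExecutor expects: the Hadoop secure-container documentation calls for container-executor to be owned by root, with its group set to a special group that the NodeManager user belongs to, and mode 6050. A minimal sketch, assuming that group is called hadoop and the installation path used earlier in this article:

# Install the freshly built binary over the one shipped with the distribution
cp target/usr/local/bin/container-executor /opt/hadoop/hadoop-2.6.0/bin/
chown root:hadoop /opt/hadoop/hadoop-2.6.0/bin/container-executor
chmod 6050 /opt/hadoop/hadoop-2.6.0/bin/container-executor
# The config file now lives under /etc/hadoop and must itself be owned by root
cp /opt/hadoop/hadoop-2.6.0/etc/hadoop/container-executor.cfg /etc/hadoop/
chown root:root /etc/hadoop/container-executor.cfg
# Sanity check: the new conf dir should now be baked into the binary
strings /opt/hadoop/hadoop-2.6.0/bin/container-executor | grep /etc/hadoop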
PS: Since I had never used C before, my solution right up until writing this article was to edit the C source directly, hard-code the configuration file path, and build container-executor with a plain cmake src... It was only while writing this summary that the whole container-executor build process finally became clear to me. Writing things up and reflecting on them really does pay off!