-bash: bin/hadoop: No such file or directory
It is saying the file bin/hadoop does not exist; change into your Hadoop directory first and then run the command.
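For example, assuming the tree is unpacked at ~/hadoop (as the prompts later in this thread suggest):

ghmin@ubuntu:~$ cd ~/hadoop
ghmin@ubuntu:~/hadoop$ bin/hadoop namenode -format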
-bash: bin/start-all.ssh: No such file or directory
Again a file-not-found error: the script should be start-all.sh, not .ssh.
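That is, from the Hadoop directory:

ghmin@ubuntu:~/hadoop$ bin/start-all.sh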
Thanks a lot, the problems above are solved, but new ones have appeared.
1. Here is what happens after running ghmin@ubuntu:~/hadoop$ bin/hadoop namenode -format:
12/08/01 12:38:14 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = ubuntu/127.0.1.1
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 1.0.2
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0.2 -r 1304954; compiled by 'hortonfo' on Sat Mar 24 23:58:21 UTC 2012
************************************************************/
12/08/01 12:38:14 INFO util.GSet: VM type = 32-bit
12/08/01 12:38:14 INFO util.GSet: 2% max memory = 19.33375 MB
12/08/01 12:38:14 INFO util.GSet: capacity = 2^22 = 4194304 entries
12/08/01 12:38:14 INFO util.GSet: recommended=4194304, actual=4194304
12/08/01 12:38:14 INFO namenode.FSNamesystem: fsOwner=ghmin
12/08/01 12:38:14 INFO namenode.FSNamesystem: supergroup=supergroup
12/08/01 12:38:14 INFO namenode.FSNamesystem: isPermissionEnabled=true
12/08/01 12:38:14 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
12/08/01 12:38:14 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
12/08/01 12:38:14 INFO namenode.NameNode: Caching file names occuring more than 10 times
12/08/01 12:38:14 ERROR namenode.NameNode: java.io.IOException: Cannot create directory /path/to/your/hadoop/hdfs/name/current
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.clearDirectory(Storage.java:297)
at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:1317)
at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:1336)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1164)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1271)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
12/08/01 12:38:14 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at ubuntu/127.0.1.1
************************************************************/
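The "Cannot create directory /path/to/your/hadoop/hdfs/name/current" error above suggests the literal placeholder path from a sample config was copied in unchanged. A minimal sketch of the relevant conf/hdfs-site.xml property, assuming Hadoop 1.0.2 (where the name directory is dfs.name.dir) and a hypothetical writable location /home/ghmin/hadoop/hdfs/name:

<configuration>
  <property>
    <name>dfs.name.dir</name>
    <!-- must be a real directory the ghmin user can create and write to,
         not the tutorial's /path/to/your/... placeholder -->
    <value>/home/ghmin/hadoop/hdfs/name</value>
  </property>
</configuration>

With that in place, bin/hadoop namenode -format should get past this error.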
2. Here is what happens after running ghmin@ubuntu:~/hadoop$ bin/start-all.sh. This is where things go wrong: I spent ages changing permissions and it still looks like this, and running it under hadoop@ubuntu gives the same result. Any advice would be much appreciated, thanks!
starting namenode, logging to /home/ghmin/hadoop/libexec/../logs/hadoop-ghmin-namenode-ubuntu.out
/home/ghmin/hadoop/bin/hadoop-daemons.sh: line 38: /home/ghmin/hadoop/bin/slaves.sh: Permission denied
/home/ghmin/hadoop/bin/hadoop-daemons.sh: line 38: exec: /home/ghmin/hadoop/bin/slaves.sh: cannot execute: Permission denied
/home/ghmin/hadoop/bin/hadoop-daemons.sh: line 38: /home/ghmin/hadoop/bin/slaves.sh: Permission denied
/home/ghmin/hadoop/bin/hadoop-daemons.sh: line 38: exec: /home/ghmin/hadoop/bin/slaves.sh: cannot execute: Permission denied
jobtracker running as process 5862. Stop it first.
/home/ghmin/hadoop/bin/hadoop-daemons.sh: line 38: /home/ghmin/hadoop/bin/slaves.sh: Permission denied
/home/ghmin/hadoop/bin/hadoop-daemons.sh: line 38: exec: /home/ghmin/hadoop/bin/slaves.sh: cannot execute: Permission denied
Only being able to run the scripts from inside bin means the environment variables are not configured very well. As for the Permission denied errors, you just need to grant the user execute permission on all the scripts under bin as the root user, for example by running: chmod u+x *
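A sketch of that fix, assuming the tree sits at /home/ghmin/hadoop; note that chmod u+x only grants execute to the file owner, so if the scripts belong to root rather than ghmin, a chown is the surer fix:

ghmin@ubuntu:~$ cd /home/ghmin/hadoop/bin
ghmin@ubuntu:~/hadoop/bin$ sudo chmod u+x *
ghmin@ubuntu:~/hadoop/bin$ sudo chown -R ghmin:ghmin /home/ghmin/hadoop   # only if the files are owned by root

Then rerun bin/start-all.sh from /home/ghmin/hadoop.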