Problem: the NameNode fails to start; its log shows:

08:08:54,179 ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed.
java.io.IOException: NameNode is not formatted.
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:331)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:104)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:427)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:395)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:299)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:569)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1479)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1488)
08:08:54,199 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException: NameNode is not formatted.
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:331)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:104)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:427)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:395)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:299)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:569)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1479)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1488)

Solution: format the NameNode.

[root@hadoop bin]# ./hadoop namenode -format
13/06/28 22:50:23 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = java.net.UnknownHostException: hadoop: hadoop
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 0.20.2
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
************************************************************/
13/06/28 22:50:23 INFO namenode.FSNamesystem: fsOwner=root,root
13/06/28 22:50:23 INFO namenode.FSNamesystem: supergroup=supergroup
13/06/28 22:50:23 INFO namenode.FSNamesystem: isPermissionEnabled=true
13/06/28 22:50:38 INFO metrics.MetricsUtil: Unable to obtain hostName
java.net.UnknownHostException: hadoop: hadoop
        at java.net.InetAddress.getLocalHost(InetAddress.java:1354)
        at org.apache.hadoop.metrics.MetricsUtil.getHostName(MetricsUtil.java:91)
        at org.apache.hadoop.metrics.MetricsUtil.createRecord(MetricsUtil.java:80)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.initialize(FSDirectory.java:73)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.<init>(FSDirectory.java:68)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:379)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:854)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:948)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:965)
13/06/28 22:50:38 INFO common.Storage: Image file of size 94 saved in 0 seconds.
13/06/28 22:50:38 INFO common.Storage: Storage directory /tmp/hadoop-root/dfs/name has been successfully formatted.
13/06/28 22:50:38 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at java.net.UnknownHostException: hadoop: hadoop
************************************************************/

(The java.net.UnknownHostException: hadoop: hadoop messages above only mean that the hostname "hadoop" cannot be resolved, for example because it is missing from /etc/hosts; the format itself still completes, as the "successfully formatted" line shows.)

Problem: 08:02:32,476 INFO org.apache.hadoop.hdfs.server.common.Storage: Storage directory /tmp/hadoop-root/dfs/name does not exist
08:02:32,480 ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed.
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /tmp/hadoop-root/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:104)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:427)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:395)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:299)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:569)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1479)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1488)
08:02:32,494 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /tmp/hadoop-root/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:104)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:427)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:395)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:299)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:569)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1479)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1488)
08:02:32,495 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at localhost/127.0.0.1
************************************************************/

Solution: create the missing storage directory:

mkdir -pv /tmp/hadoop-root/dfs/name
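A note before moving on: the default name directory lives under /tmp (dfs.name.dir defaults to ${hadoop.tmp.dir}/dfs/name), so a reboot or a tmp cleanup can wipe it and reproduce exactly this error. A minimal sketch of recovering and then moving the metadata somewhere persistent (the /var/hadoop path is only an assumed example, not from the original post):

[root@hadoop bin]# mkdir -pv /tmp/hadoop-root/dfs/name      # recreate the missing storage directory
[root@hadoop bin]# ./hadoop namenode -format                # format again so fsimage/edits/VERSION exist
# Optionally point dfs.name.dir at a persistent location in conf/hdfs-site.xml, for example:
#   <property><name>dfs.name.dir</name><value>/var/hadoop/dfs/name</value></property>
# then run the format once more against the new directory before starting the NameNode.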
Hadoop problem: starting Hadoop reports that the NameNode is not formatted: java.io.IOException: NameNode is not formatted.

1. Start Hadoop
ubuntu@ubuntu:~/hadoop-1.0.4/bin$ ./start-all.sh
starting namenode, logging to /home/ubuntu/hadoop-1.0.4/libexec/../logs/hadoop-ubuntu-namenode-ubuntu.out
localhost: starting datanode, logging to /home/ubuntu/hadoop-1.0.4/libexec/../logs/hadoop-ubuntu-datanode-ubuntu.out
localhost: starting secondarynamenode, logging to /home/ubuntu/hadoop-1.0.4/libexec/../logs/hadoop-ubuntu-secondarynamenode-ubuntu.out
starting jobtracker, logging to /home/ubuntu/hadoop-1.0.4/libexec/../logs/hadoop-ubuntu-jobtracker-ubuntu.out
localhost: starting tasktracker, logging to /home/ubuntu/hadoop-1.0.4/libexec/../logs/hadoop-ubuntu-tasktracker-ubuntu.out

2. Accessing localhost:50070 fails, which means the NameNode did not start.
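A quick sanity check before digging into the logs (a sketch, not part of the original post; jps ships with the JDK):

ubuntu@ubuntu:~/hadoop-1.0.4/bin$ jps | grep -i namenode                        # should list NameNode (and SecondaryNameNode)
ubuntu@ubuntu:~/hadoop-1.0.4/bin$ curl -sI http://localhost:50070/ | head -1    # the NameNode web UI should answer here when it is up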
3. Check the NameNode startup log.
ubuntu@ubuntu:~/hadoop-1.0.4/bin$ cd ../logs
ubuntu@ubuntu:~/hadoop-1.0.4/logs$ view hadoop-ubuntu-namenode-ubuntu.log
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = ubuntu/127.0.1.1
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 1.0.4
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1393290; compiled by 'hortonfo' on Wed Oct  3 05:13:58 UTC 2012
************************************************************/
07:05:46,936 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
07:05:46,945 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
07:05:46,945 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
07:05:46,945 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
07:05:47,053 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
07:05:47,058 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!
07:05:47,064 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm registered.
07:05:47,064 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source NameNode registered.
07:05:47,092 INFO org.apache.hadoop.hdfs.util.GSet: VM type
07:05:47,092 INFO org.apache.hadoop.hdfs.util.GSet: 2% max memory = 19.33375 MB
07:05:47,092 INFO org.apache.hadoop.hdfs.util.GSet: capacity = 2^22 = 4194304 entries
07:05:47,092 INFO org.apache.hadoop.hdfs.util.GSet: recommended=4194304, actual=4194304
07:05:47,140 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=ubuntu
07:05:47,140 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
07:05:47,140 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true
07:05:47,143 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.block.invalidate.limit=100
07:05:47,143 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
07:05:47,154 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStateMBean and NameNodeMXBean
07:05:47,169 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
07:05:47,174 ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed.
java.io.IOException: NameNode is not formatted.
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
07:05:47,175 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException: NameNode is not formatted.
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)

The lines "07:05:47,174 ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed." and "java.io.IOException: NameNode is not formatted." show that the NameNode has not been formatted.
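"NameNode is not formatted" simply means that no valid metadata (a VERSION file plus fsimage/edits) was found in the name directory. A quick way to confirm this by hand (a sketch; the path is the one that appears in the format output below):

ubuntu@ubuntu:~$ ls /home/ubuntu/hadoop-1.0.4/tmp/dfs/name/current/
# a formatted name directory contains VERSION, fsimage, edits and fstime;
# if the directory is missing or empty, the NameNode refuses to start with exactly this error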
4. Format the NameNode. It asks whether to re-format, so I entered Y.
ubuntu@ubuntu:~/hadoop-1.0.4$ bin/hadoop namenode -format
13/01/24 07:05:08 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = ubuntu/127.0.1.1
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 1.0.4
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1393290; compiled by 'hortonfo' on Wed Oct  3 05:13:58 UTC 2012
************************************************************/
Re-format filesystem in /home/ubuntu/hadoop-1.0.4/tmp/dfs/name ? (Y or N) y
Format aborted in /home/ubuntu/hadoop-1.0.4/tmp/dfs/name
13/01/24 07:05:12 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at ubuntu/127.0.1.1
************************************************************/

5. After "formatting", I restarted Hadoop, but localhost:50070 still could not be reached, and the NameNode log still reported that the NameNode was not formatted. (Note the "Format aborted" line above: the re-format prompt is case-sensitive, so answering with a lowercase y aborts the format, which is why nothing actually changed here.)
6. So I deleted everything under the tmp directory configured in core-site.xml, stopped all Hadoop services, formatted again, and restarted Hadoop; this time accessing localhost:50070 succeeded.
ubuntu@ubuntu:~/hadoop-1.0.4/tmp$ rm -rf *
ubuntu@ubuntu:~/hadoop-1.0.4/tmp$ cd ../bin
ubuntu@ubuntu:~/hadoop-1.0.4/bin$ ./stop-all.sh
stopping jobtracker
localhost: stopping tasktracker
no namenode to stop
localhost: stopping datanode
localhost: stopping secondarynamenode
ubuntu@ubuntu:~/hadoop-1.0.4/bin$ hadoop namenode -format
13/01/24 07:10:45 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = ubuntu/127.0.1.1
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 1.0.4
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1393290; compiled by 'hortonfo' on Wed Oct  3 05:13:58 UTC 2012
************************************************************/
13/01/24 07:10:46 INFO util.GSet: VM type
13/01/24 07:10:46 INFO util.GSet: 2% max memory = 19.33375 MB
13/01/24 07:10:46 INFO util.GSet: capacity = 2^22 = 4194304 entries
13/01/24 07:10:46 INFO util.GSet: recommended=4194304, actual=4194304
13/01/24 07:10:46 INFO namenode.FSNamesystem: fsOwner=ubuntu
13/01/24 07:10:46 INFO namenode.FSNamesystem: supergroup=supergroup
13/01/24 07:10:46 INFO namenode.FSNamesystem: isPermissionEnabled=true
13/01/24 07:10:46 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
13/01/24 07:10:46 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
13/01/24 07:10:46 INFO namenode.NameNode: Caching file names occuring more than 10 times
13/01/24 07:10:46 INFO common.Storage: Image file of size 112 saved in 0 seconds.
13/01/24 07:10:46 INFO common.Storage: Storage directory /home/ubuntu/hadoop-1.0.4/tmp/dfs/name has been successfully formatted.
13/01/24 07:10:46 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at ubuntu/127.0.1.1
************************************************************/
ubuntu@ubuntu:~/hadoop-1.0.4/bin$ ./start-all.sh
starting namenode, logging to /home/ubuntu/hadoop-1.0.4/libexec/../logs/hadoop-ubuntu-namenode-ubuntu.out
localhost: starting datanode, logging to /home/ubuntu/hadoop-1.0.4/libexec/../logs/hadoop-ubuntu-datanode-ubuntu.out
localhost: starting secondarynamenode, logging to /home/ubuntu/hadoop-1.0.4/libexec/../logs/hadoop-ubuntu-secondarynamenode-ubuntu.out
starting jobtracker, logging to /home/ubuntu/hadoop-1.0.4/libexec/../logs/hadoop-ubuntu-jobtracker-ubuntu.out
localhost: starting tasktracker, logging to /home/ubuntu/hadoop-1.0.4/libexec/../logs/hadoop-ubuntu-tasktracker-ubuntu.out
7. The root cause: after changing the tmp directory in the configuration file, HDFS was never formatted against the new location, which is why starting Hadoop reported that the NameNode was not formatted.
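In other words, whenever hadoop.tmp.dir (or dfs.name.dir) is moved, the new location starts out empty and has to be formatted before the NameNode can start. A minimal sketch of the whole sequence under that assumption (the core-site.xml value shown is the one implied by the paths in this post):

# conf/core-site.xml (excerpt):
#   <property>
#     <name>hadoop.tmp.dir</name>
#     <value>/home/ubuntu/hadoop-1.0.4/tmp</value>
#   </property>
ubuntu@ubuntu:~/hadoop-1.0.4$ bin/stop-all.sh
ubuntu@ubuntu:~/hadoop-1.0.4$ bin/hadoop namenode -format      # answer the re-format prompt with an uppercase Y
ubuntu@ubuntu:~/hadoop-1.0.4$ bin/start-all.sh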
Another case: the NameNode dies shortly after start-all.sh. I originally had a three-node Hadoop distributed environment deployed on VMware Workstation, and everything ran fine. Because of resource constraints, I copied the VM files of one of the nodes to another physical machine, three copies in total, intending to set up an identical Hadoop cluster there. I set up mutual SSH key authentication so the machines could reach each other. Nothing in the configuration changed except the machine names, so I updated the host names in core-site.xml and mapred-site.xml accordingly. When I ran start-all.sh, the NameNode died shortly after starting, with the following error:
... ReplicationMonitor thread received InterruptedException. java.lang.InterruptedException: sleep interrupted
01:02:37,555 INFO org.apache.hadoop.hdfs.server.namenode.DecommissionManager: Interrupted Monitor
java.lang.InterruptedException: sleep interrupted
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.namenode.DecommissionManager$Monitor.run(DecommissionManager.
        at java.lang.Thread.run(Thread.java:619)
org.apache.hadoop.hdfs.server.namenode.NameNode: java.net.BindException: Cannot assign requested address
        at sun.nio.ch.Net.bind(Native Method)
        at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:119)
        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
        at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
        at org.apache.hadoop.http.HttpServer.start(HttpServer.java:424)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:246)
I found a lot of advice online: some said the machine name in /etc/hosts was wrong and should be a domain name, or that the localhost line should be removed; others said the firewall had not been turned off, and so on. I tried all of it and it still didn't work.
Finally, on a foreign site, I read that the cause is the address that dfs.http.address points to: it was pointing at a local IP when it should be a non-local one. I didn't remember ever configuring this property, but I checked hdfs-site.xml anyway and found that I had indeed configured it before, pointing at another IP, and that I had "commented it out" with "#", so how could it still take effect? I removed the comment and changed the IP to the host name of my newly configured NameNode:
<property>
  <name>dfs.http.address</name>
  <value>master35:50070</value>
</property>
Then I restarted, and it worked. This problem cost me a long time, and I'm still puzzled: I had commented the property out, so it shouldn't have taken effect. Strange.
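A likely explanation for the "commented-out property still takes effect" puzzle: hdfs-site.xml is XML, and a leading "#" is not a comment in XML, so Hadoop keeps parsing the property. Disabling it needs a real XML comment. A sketch (some-other-host stands in for the old IP, which the post does not give; master35 is the host name used in this post):

<!-- disabled with an XML comment; a "#" prefix would NOT disable it:
<property>
  <name>dfs.http.address</name>
  <value>some-other-host:50070</value>
</property>
-->
<property>
  <name>dfs.http.address</name>
  <value>master35:50070</value>
</property>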
NameNode won't start
Formatting completes successfully, but after starting, jps shows that only the NameNode is missing.
The NameNode log reports that the address is already in use:
10:00:19,016 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = master/192.168.235.129
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 0.20.2
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
************************************************************/
10:00:19,233 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.net.BindException: Problem binding to master/192.168.235.129:9000 : Address already in use
        at org.apache.hadoop.ipc.Server.bind(Server.java:190)
        at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:253)
        at org.apache.hadoop.ipc.Server.<init>(Server.java:1026)
        at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:488)
        at org.apache.hadoop.ipc.RPC.getServer(RPC.java:450)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:191)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:279)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:956)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:965)
Caused by: java.net.BindException: Address already in use
        at sun.nio.ch.Net.bind(Native Method)
        at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:124)
        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
        at org.apache.hadoop.ipc.Server.bind(Server.java:188)
        ... 8 more
10:00:19,272 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at master/192.168.235.129
************************************************************/
1. Use netstat to see which process is occupying the port, kill it, and try again.
2. Try configuring the NameNode to use a different port.
I'm curious what program is actually holding it.
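A sketch of what that check could look like (standard Linux tools, not from the original reply):

[hadoop@master root]$ netstat -tlnp | grep 9000        # which PID is listening on the NameNode RPC port?
[hadoop@master root]$ lsof -i:9000                     # the same information via lsof
[hadoop@master root]$ kill <pid>                       # stop that process, then retry start-all.sh
# To move the RPC port instead, change the port in fs.default.name in core-site.xml
# (e.g. hdfs://master:8090) on the master and on every slave, then restart.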
Reply (quoting tigerfish's suggestion to find and kill the process holding the port):
[hadoop@master root]$ lsof -i:9000
COMMAND   PID   USER   FD   TYPE DEVICE SIZE NODE NAME
java    10938 hadoop   52u  IPv6  24312       TCP master:48160->master:cslistener (ESTABLISHED)
java    12058 hadoop   40u  IPv6  26785       TCP master:54098->master:cslistener (ESTABLISHED)
[hadoop@master root]$ kill -9 10938
[hadoop@master root]$ kill -9 12058
[hadoop@master root]$ lsof -i:9000
[hadoop@master root]$ /home/hadoop/hadoop-0.20.2/bin/stop-all.sh
no jobtracker to stop
slave2: no tasktracker to stop
slave1: no tasktracker to stop
no namenode to stop
slave1: stopping datanode
slave2: stopping datanode
master: no secondarynamenode to stop
[hadoop@master root]$ /home/hadoop/hadoop-0.20.2/bin/hadoop namenode -format
13/02/27 10:17:55 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = master/192.168.235.129
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 0.20.2
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
************************************************************/
Re-format filesystem in /home/hadoop/hadoop_tmp/dfs/name ? (Y or N) Y
13/02/27 10:17:59 INFO namenode.FSNamesystem: fsOwner=hadoop,root
13/02/27 10:17:59 INFO namenode.FSNamesystem: supergroup=supergroup
13/02/27 10:17:59 INFO namenode.FSNamesystem: isPermissionEnabled=true
13/02/27 10:17:59 INFO common.Storage: Image file of size 96 saved in 0 seconds.
13/02/27 10:17:59 INFO common.Storage: Storage directory /home/hadoop/hadoop_tmp/dfs/name has been successfully formatted.
13/02/27 10:17:59 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at master/192.168.235.129
************************************************************/
[hadoop@master root]$ /home/hadoop/hadoop-0.20.2/bin/start-all.sh
starting namenode, logging to /home/hadoop/hadoop-0.20.2/bin/../logs/hadoop-hadoop-namenode-master.out
slave1: starting datanode, logging to /home/hadoop/hadoop-0.20.2/bin/../logs/hadoop-hadoop-datanode-slave1.out
slave2: starting datanode, logging to /home/hadoop/hadoop-0.20.2/bin/../logs/hadoop-hadoop-datanode-slave2.out
master: starting secondarynamenode, logging to /home/hadoop/hadoop-0.20.2/bin/../logs/hadoop-hadoop-secondarynamenode-master.out
starting jobtracker, logging to /home/hadoop/hadoop-0.20.2/bin/../logs/hadoop-hadoop-jobtracker-master.out
slave1: starting tasktracker, logging to /home/hadoop/hadoop-0.20.2/bin/../logs/hadoop-hadoop-tasktracker-slave1.out
slave2: starting tasktracker, logging to /home/hadoop/hadoop-0.20.2/bin/../logs/hadoop-hadoop-tasktracker-slave2.out
[hadoop@master root]$ /usr/java/jdk1.6.0_37/bin/jps
12805 Jps
12655 SecondaryNameNode
12721 JobTracker
[hadoop@master root]$
I killed them, but it still doesn't work.
Reply (quoting tigerfish's suggestion to try a different port):
I changed it to port 8090, and it still fails:
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = master/192.168.235.129
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 0.20.2
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
************************************************************/
10:25:00,041 INFO org.apache.hadoop.ipc.metrics.RpcMetrics: Initializing RPC Metrics with hostName=NameNode, port=8090
10:25:00,051 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Namenode up at: master/192.168.235.129:8090
10:25:00,056 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=NameNode, sessionId=null
10:25:00,058 INFO org.apache.hadoop.hdfs.server.namenode.metrics.NameNodeMetrics: Initializing NameNodeMeterics using context object:org.apache.hadoop.metrics.spi.NullContext
10:25:00,199 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=hadoop,root
10:25:00,200 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
10:25:00,200 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true
10:25:00,221 INFO org.apache.hadoop.hdfs.server.namenode.metrics.FSNamesystemMetrics: Initializing FSNamesystemMetrics using context object:org.apache.hadoop.metrics.spi.NullContext
10:25:00,336 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStatusMBean
10:25:00,434 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files = 1
10:25:00,444 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files under construction = 0
10:25:00,445 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 96 loaded in 0 seconds.
10:25:00,446 INFO org.apache.hadoop.hdfs.server.common.Storage: Edits file /home/hadoop/hadoop_tmp/dfs/name/current/edits of size 4 edits # 0 loaded in 0 seconds.
10:25:00,456 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 96 saved in 0 seconds.
10:25:00,499 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading FSImage in 369 msecs
10:25:00,503 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Total number of blocks = 0
10:25:00,503 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of invalid blocks = 0
10:25:00,503 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of under-replicated blocks = 0
10:25:00,503 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of over-replicated blocks = 0
10:25:00,503 INFO org.apache.hadoop.hdfs.StateChange: STATE* Leaving safe mode after 0 secs.
10:25:00,504 INFO org.apache.hadoop.hdfs.StateChange: STATE* Network topology has 0 racks and 0 datanodes
10:25:00,504 INFO org.apache.hadoop.hdfs.StateChange: STATE* UnderReplicatedBlocks has 0 blocks
10:25:00,904 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
10:25:01,088 INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50070
10:25:01,093 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
10:25:01,095 INFO org.apache.hadoop.hdfs.server.namenode.DecommissionManager: Interrupted Monitor
java.lang.InterruptedException: sleep interrupted
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.namenode.DecommissionManager$Monitor.run(DecommissionManager.java:65)
        at java.lang.Thread.run(Thread.java:662)
10:25:01,097 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of transactions: 0 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0
10:25:01,173 INFO org.apache.hadoop.ipc.Server: Stopping server on 8090
10:25:01,175 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.net.BindException: Address already in use
        at sun.nio.ch.Net.bind(Native Method)
        at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:124)
        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
        at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
        at org.apache.hadoop.http.HttpServer.start(HttpServer.java:425)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:246)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:202)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:279)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:956)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:965)

(Note that in this run the RPC server does come up on port 8090; the BindException now comes from the embedded HTTP server while opening port 50070, which suggests a leftover NameNode or some other process is still holding that port.)
The earlier process probably wasn't killed. Can you describe exactly how you killed the processes and what output you got?
Reply (quoting vblvbl's "changed to port 8090, still fails"):
See if this helps you.
Delete the data files and then re-format; that should be enough.
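A concrete sketch of that suggestion for this cluster (the master path is taken from the thread; the slave paths are an assumption):

[hadoop@master root]$ /home/hadoop/hadoop-0.20.2/bin/stop-all.sh
[hadoop@master root]$ rm -rf /home/hadoop/hadoop_tmp/*                 # NameNode metadata on the master
# also clear the DataNode data directories on slave1 and slave2 (assumed to use the same hadoop_tmp path),
# otherwise the DataNodes will reject the newly formatted namespace with a namespaceID mismatch
[hadoop@master root]$ /home/hadoop/hadoop-0.20.2/bin/hadoop namenode -format
[hadoop@master root]$ /home/hadoop/hadoop-0.20.2/bin/start-all.sh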
Reply (quoting cruiser): lines 5 and 6 of the code in post #3 are exactly the operations I used to kill those processes.
Reply (quoting rtdr's "delete the data files and re-format"): I even switched to completely new data directories and formatted again, and it still didn't come up.
After getting home from work today, I reopened the virtual machine, started Hadoop, and the NameNode came up without my doing anything else. So it really was a process that hadn't been killed cleanly; I had been fiddling with this until the middle of the night yesterday. Thanks to everyone above for the help!
[hadoop@master hadoop_tmp]$ /usr/java/jdk1.6.0_37/bin/jps
3745 JobTracker
3846 Jps
3686 SecondaryNameNode
3529 NameNode
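For anyone who hits the same thing: before blaming the configuration, check for half-dead daemons left over from an earlier start (a sketch; the jps path is the one used in this thread):

[hadoop@master root]$ /usr/java/jdk1.6.0_37/bin/jps                    # list the Hadoop daemons running under this JDK
[hadoop@master root]$ ps -ef | grep java | grep -v grep                # catch java processes jps cannot see
[hadoop@master root]$ kill <pid>                                       # kill any leftover NameNode/DataNode; use -9 only as a last resort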