NameNode won't start, asking for help online

Help: hadoop2.X distributed setup, neither of the two NameNodes starts properly (百度知道)
Environment requirements: 1. JDK 6.0 or later must be installed (unpack it yourself or use the system's package tooling, e.g. yum). 2. An SSH environment with passwordless login is needed (install and configure ssh; the detailed steps are covered later). Installation steps: 1. Configure the hosts file, mapping each hostname to its IP address, as with Master in the figure.
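
A minimal sketch of those two prerequisites (the IP addresses, the Slave1 name, and the hadoop user below are placeholders, not from the original post):

# /etc/hosts on every node: map hostnames to addresses (example values)
192.168.1.100   Master
192.168.1.101   Slave1

# passwordless SSH from Master to each slave (run on Master)
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa   # generate a key pair with an empty passphrase
ssh-copy-id hadoop@Slave1                  # install the public key on Slave1
ssh Slave1 hostname                        # should log in without a password prompt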
其他类似问题
为您推荐:
分布式的相关知识
等待您来回答
下载知道APP
随时随地咨询
Help: why does jps only find SecondaryNameNode? (百度知道)
As the prompt already tells you, this script is deprecated; use start-dfs.sh and start-mapred.sh instead.
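
For reference, a hedged sketch of the recommended startup sequence (the scripts ship in Hadoop's bin/sbin directory; on 2.x the MapReduce side is started with start-yarn.sh rather than start-mapred.sh):

start-dfs.sh      # starts NameNode, DataNodes and SecondaryNameNode
start-yarn.sh     # 2.x: starts ResourceManager and NodeManagers (1.x used start-mapred.sh)
jps               # should now list NameNode and DataNode, not just SecondaryNameNode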
Hive CLI fails to start: Unable to instantiate SessionHiveMetaStoreClient (开源中国/OSChina question)
Hive's metastore is in MySQL, version=1.1.0-cdh5.4.0, and it had always worked fine. Today I ran a select statement from the hive command line; it felt very slow, so I force-killed it, and after that I could no longer get into the hive command line at all. I also found that the NameNode had died. I restarted the NameNode, DataNode, and Client, but running hive still fails with the same error. Please help; if you can solve this, a WeChat red packet is yours. The error output is below:
[ftp_user@Client110 hive]$ hive
15/09/02 13:40:23 WARN conf.HiveConf: HiveConf of name hive.mergejob.maponly does not exist
Logging initialized using configuration in file:/etc/hive/conf.dist/hive-log4j.properties
Exception in thread "main" java.lang.RuntimeException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
        at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:472)
        at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:671)
        at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:615)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
        at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1488)
        at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:64)
        at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:74)
        at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:2841)
        at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:2860)
        at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:453)
        ... 8 more
Caused by: java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1486)
        ... 13 more
Caused by: javax.jdo.JDOFatalDataStoreException: Unable to open a test connection to the given database. JDBC url = jdbc:mysql://192.168.210.101/metastore, username = hiveuser. Terminating connection pool (set lazyInit to true if you expect to start your database after your app). Original Exception: ------
com.mysql.jdbc.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at com.mysql.jdbc.Util.handleNewInstance(Util.java:411)
        at com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:1117)
        at com.mysql.jdbc.MysqlIO.<init>(MysqlIO.java:350)
        at com.mysql.jdbc.ConnectionImpl.coreConnect(ConnectionImpl.java:2393)
        at com.mysql.jdbc.ConnectionImpl.connectOneTryOnly(ConnectionImpl.java:2430)
        at com.mysql.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:2215)
        at com.mysql.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:813)
        at com.mysql.jdbc.JDBC4Connection.<init>(JDBC4Connection.java:47)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at com.mysql.jdbc.Util.handleNewInstance(Util.java:411)
        at com.mysql.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:399)
        at com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:334)
        at java.sql.DriverManager.getConnection(DriverManager.java:571)
        at java.sql.DriverManager.getConnection(DriverManager.java:187)
        at com.jolbox.bonecp.BoneCP.obtainRawInternalConnection(BoneCP.java:361)
        at com.jolbox.bonecp.BoneCP.<init>(BoneCP.java:416)
        at com.jolbox.bonecp.BoneCPDataSource.getConnection(BoneCPDataSource.java:120)
        at org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:501)
        at org.datanucleus.store.rdbms.RDBMSStoreManager.<init>(RDBMSStoreManager.java:298)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.datanucleus.plugin.NonManagedPluginRegistry.createExecutableExtension(NonManagedPluginRegistry.java:631)
        at org.datanucleus.plugin.PluginManager.createExecutableExtension(PluginManager.java:301)
        at org.datanucleus.NucleusContext.createStoreManagerForProperties(NucleusContext.java:1187)
        at org.datanucleus.NucleusContext.initialise(NucleusContext.java:356)
        at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:775)
        at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.createPersistenceManagerFactory(JDOPersistenceManagerFactory.java:333)
        at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.getPersistenceManagerFactory(JDOPersistenceManagerFactory.java:202)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at javax.jdo.JDOHelper$16.run(JDOHelper.java:1965)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.jdo.JDOHelper.invoke(JDOHelper.java:1960)
        at javax.jdo.JDOHelper.invokeGetPersistenceManagerFactoryOnImplementation(JDOHelper.java:1166)
        at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:808)
        at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:701)
        at org.apache.hadoop.hive.metastore.ObjectStore.getPMF(ObjectStore.java:365)
        at org.apache.hadoop.hive.metastore.ObjectStore.getPersistenceManager(ObjectStore.java:394)
        at org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:291)
        at org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:258)
        at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
        at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
        at org.apache.hadoop.hive.metastore.RawStoreProxy.<init>(RawStoreProxy.java:56)
        at org.apache.hadoop.hive.metastore.RawStoreProxy.getProxy(RawStoreProxy.java:65)
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:579)
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:557)
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:610)
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:448)
        at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:66)
        at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:72)
        at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:5601)
        at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:193)
        at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:74)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1486)
        at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:64)
        at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:74)
        at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:2841)
        at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:2860)
        at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:453)
        at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:671)
        at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:615)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: java.net.ConnectException: Connection refused
        at java.net.PlainSocketImpl.socketConnect(Native Method)
        at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
        at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
        at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
        at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
        at java.net.Socket.connect(Socket.java:579)
        at java.net.Socket.connect(Socket.java:528)
        at java.net.Socket.<init>(Socket.java:425)
        at java.net.Socket.<init>(Socket.java:241)
        at com.mysql.jdbc.StandardSocketFactory.connect(StandardSocketFactory.java:257)
        at com.mysql.jdbc.MysqlIO.<init>(MysqlIO.java:300)
        ... 75 more
2 answers in total:

The metastore database obviously can't be reached. Check your MySQL database.

This log alone doesn't tell you much; look at your hive metastore log instead.
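
Since the deepest cause in the trace is java.net.ConnectException: Connection refused against jdbc:mysql://192.168.210.101/metastore, one hedged first check (exact service name varies by distro) is whether mysqld is up and reachable with the metastore's credentials:

# on the MySQL host (192.168.210.101)
service mysqld status              # or: systemctl status mysqld
netstat -lntp | grep 3306          # mysqld should be listening on 3306

# from the Hive client machine, with the credentials from the JDBC URL
mysql -h 192.168.210.101 -u hiveuser -p -e 'use metastore; select 1;'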
namenode won't start, asking for help online [大数据(hadoop系列) board, about云开发, Discuz! Archiver]
ywdlucking
I set up a two-node hadoop cluster.
It ran fine before.
This time I accidentally deleted some file or other; after starting up I found that the namenode on the master node won't start, while the slave node is fine.
Error log:
17:38:56,865 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = msg-01/10.1.65.121
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 2.4.1
STARTUP_MSG:   classpath = /opt/hadoop/hadoop-2.4.1/etc/hadoop plus the jars under /opt/hadoop/hadoop-2.4.1/share/hadoop/{common,hdfs,yarn,mapreduce} and /contrib/capacity-scheduler/*.jar (full listing trimmed)
STARTUP_MSG:   build = http://svn.apache.org/repos/asf/hadoop/common -r 1604318; compiled by 'jenkins' on T05:43Z
STARTUP_MSG:   java = 1.6.0_45
************************************************************/
17:38:56,875 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
17:38:56,878 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: createNameNode []
17:38:57,127 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
17:38:57,229 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
17:38:57,229 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
17:38:57,232 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: fs.defaultFS is hdfs://msg-01:9000
17:38:57,233 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Clients are to use msg-01:9000 to access this namenode/service.
17:38:57,364 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17:38:57,564 INFO org.apache.hadoop.hdfs.DFSUtil: Starting web server as: ${dfs.web.authentication.kerberos.principal}
17:38:57,564 INFO org.apache.hadoop.hdfs.DFSUtil: Starting Web-server for hdfs at: http://0.0.0.0:50070
17:38:57,611 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
17:38:57,615 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.namenode is not defined
17:38:57,628 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
17:38:57,630 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
17:38:57,630 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
17:38:57,630 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
17:38:57,685 INFO org.apache.hadoop.http.HttpServer2: Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
17:38:57,686 INFO org.apache.hadoop.http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
17:38:57,719 INFO org.apache.hadoop.http.HttpServer2: Jetty bound to port 50070
17:38:57,719 INFO org.mortbay.log: jetty-6.1.26
17:38:57,935 WARN org.apache.hadoop.security.authentication.server.AuthenticationFilter: 'signature.secret' configuration not set, using a random value as secret
17:38:57,981 INFO org.mortbay.log: Started :50070
17:38:58,016 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one image storage directory (dfs.namenode.name.dir) configured. Beware of dataloss due to lack of redundant storage directories!
17:38:58,016 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one namespace edits storage directory (dfs.namenode.edits.dir) configured. Beware of dataloss due to lack of redundant storage directories!
17:38:58,057 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsLock is fair:true
17:38:58,089 INFO org.apache.hadoop.hdfs.server.namenode.HostFileManager: read includes:
17:38:58,089 INFO org.apache.hadoop.hdfs.server.namenode.HostFileManager: read excludes:
17:38:58,092 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
17:38:58,092 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
17:38:58,095 INFO org.apache.hadoop.util.GSet: Computing capacity for map BlocksMap
17:38:58,095 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
17:38:58,100 INFO org.apache.hadoop.util.GSet: 2.0% max memory 888.9 MB = 17.8 MB
17:38:58,101 INFO org.apache.hadoop.util.GSet: capacity      = 2^21 = 2097152 entries
17:38:58,106 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.block.access.token.enable=false
17:38:58,106 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: defaultReplication         = 1
17:38:58,106 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplication             = 512
17:38:58,106 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: minReplication             = 1
17:38:58,106 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplicationStreams      = 2
17:38:58,106 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: shouldCheckForEnoughRacks  = false
17:38:58,106 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: replicationRecheckInterval = 3000
17:38:58,106 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: encryptDataTransfer        = false
17:38:58,106 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
17:38:58,112 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner             = root (auth:SIMPLE)
17:38:58,113 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup          = supergroup
17:38:58,113 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled = false
17:38:58,113 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: HA Enabled: false
17:38:58,115 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Append Enabled: true
17:38:58,373 INFO org.apache.hadoop.util.GSet: Computing capacity for map INodeMap
17:38:58,373 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
17:38:58,373 INFO org.apache.hadoop.util.GSet: 1.0% max memory 888.9 MB = 8.9 MB
17:38:58,373 INFO org.apache.hadoop.util.GSet: capacity      = 2^20 = 1048576 entries
17:38:58,374 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
17:38:58,381 INFO org.apache.hadoop.util.GSet: Computing capacity for map cachedBlocks
17:38:58,381 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
17:38:58,381 INFO org.apache.hadoop.util.GSet: 0.25% max memory 888.9 MB = 2.2 MB
17:38:58,381 INFO org.apache.hadoop.util.GSet: capacity      = 2^18 = 262144 entries
17:38:58,382 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.6033
17:38:58,382 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
17:38:58,382 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
17:38:58,384 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache on namenode is enabled
17:38:58,384 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
17:38:58,386 INFO org.apache.hadoop.util.GSet: Computing capacity for map NameNodeRetryCache
17:38:58,386 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
17:38:58,386 INFO org.apache.hadoop.util.GSet: 0.447746% max memory 888.9 MB = 273.1 KB
17:38:58,387 INFO org.apache.hadoop.util.GSet: capacity      = 2^15 = 32768 entries
17:38:58,391 INFO org.apache.hadoop.hdfs.server.namenode.AclConfigFlag: ACLs enabled? false
17:38:58,400 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /opt/hadoop/hadoop-2.4.1/name/in_use.lock acquired by nodename
17:38:58,543 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Recovering unfinalized segments in /opt/hadoop/hadoop-2.4.1/name/current
17:38:58,664 INFO org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode: Loading 1 INodes.
17:38:58,699 INFO org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf: Loaded FSImage in 0 seconds.
17:38:58,699 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Loaded image for txid 0 from /opt/hadoop/hadoop-2.4.1/name/current/fsimage_0000000
17:38:58,703 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Reading org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream expecting start txid #1
17:38:58,704 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Start loading edits file /opt/hadoop/hadoop-2.4.1/name/current/edits_0687
17:38:58,705 INFO org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: Fast-forwarding stream '/opt/hadoop/hadoop-2.4.1/name/current/edits_0687' to transaction ID 1
17:38:58,754 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Encountered exception loading fsimage
java.io.IOException: There appears to be a gap in the edit log. We expected txid 1, but got txid 686.
        at org.apache.hadoop.hdfs.server.namenode.MetaRecoveryContext.editLogLoaderPrompt(MetaRecoveryContext.java:94)
        at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:205)
        at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:133)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:805)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:665)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:272)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:891)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:638)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:503)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:559)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:724)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:708)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1358)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1424)
17:38:58,771 INFO org.mortbay.log: Stopped :50070
17:38:58,774 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system...
17:38:58,775 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped.
17:38:58,775 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
17:38:58,775 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join
java.io.IOException: There appears to be a gap in the edit log. We expected txid 1, but got txid 686.
        at org.apache.hadoop.hdfs.server.namenode.MetaRecoveryContext.editLogLoaderPrompt(MetaRecoveryContext.java:94)
        at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:205)
        at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:133)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:805)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:665)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:272)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:891)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:638)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:503)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:559)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:724)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:708)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1358)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1424)
17:38:58,777 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
17:38:58,779 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at msg-01/10.1.65.121
************************************************************/
I'm completely stuck, any pointers from the experts would be much appreciated!!!!
ywdlucking
Nobody answered, so I had to work it out myself..... Open the namenode directory configured in hdfs-site.xml,
then check whether a segment for txid 1 exists under it. If it doesn't, clear the directory and reformat the namenode.
"We expected txid 1, but got txid 686." The mismatch is probably the result of formatting more than once.
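
In command form, the fix described above looks roughly like this; a sketch only, where the name directory path is the one from the log, and wiping it destroys all HDFS metadata, so it is only reasonable when the data is expendable:

stop-dfs.sh                                  # stop HDFS before touching metadata
ls /opt/hadoop/hadoop-2.4.1/name/current     # look for fsimage/edits files covering txid 1
hdfs namenode -recover                       # gentler first attempt: interactively skip over the edit-log gap
# if recovery is not an option and the data is expendable:
rm -rf /opt/hadoop/hadoop-2.4.1/name/*       # wipe namenode metadata (irreversible)
hdfs namenode -format                        # reformat, then restart with start-dfs.sh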