Formatting Hadoop's HDFS file system under Cygwin: startup and "No such file or directory" errors

The relevant part of the datanode log:
11:59:46,513 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
/**************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG:   host = HONG
STARTUP_MSG:   version = 1.2.1
STARTUP_MSG:   build = -r 1503152; compiled by 'mattf' on Mon Jul 22 15:23:09 PDT 2013
STARTUP_MSG:   java = 1.8.0_60
**************************************************/
11:59:46,591 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
11:59:46,606 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
11:59:46,606 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
11:59:46,606 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started
11:59:46,653 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
11:59:46,653 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!
11:59:46,700 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform… using builtin-java classes where applicable
11:59:46,700 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Invalid directory in dfs.data.dir: Failed to set permissions of path: \hadoop\sysdata\dfs\data to 0755
11:59:46,700 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: All directories in dfs.data.dir are invalid.
11:59:46,700 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
11:59:46,716 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/**************************************************

Asked at 23:26:

Installing Hadoop on Windows: startup reports "No such file or directory"

I have spent the past few days setting up Hadoop on Windows, following the standard steps from a reference blog post. After finally getting to the end, starting Hadoop always fails with the error in the title.

The HDFS format log:
$ bin/hadoop namenode -format
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
15/07/13 23:07:53 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = 58-PC/192.168.0.102
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.7.0
STARTUP_MSG:   classpath = D:\tools\cygwin32\home\lenovo\hadoop\etc\hadoop;D:\tools\cygwin32\home\lenovo\hadoop\share\hadoop\common\lib\activation-1.1.jar;D:\tools\cygwin32\home\lenovo\hadoop\share\hadoop\common\lib\apacheds-i18n-2.0.0-M15.jar;D:\tools\cygwin32\home\lenovo\hadoop\share\hadoop\common\lib\apacheds-kerberos-codec-2.0.0-M15.jar;...
STARTUP_MSG:   java = 1.8.0_31
************************************************************/
15/07/13 23:07:53 INFO namenode.NameNode: createNameNode [-format]
15/07/13 23:07:54 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Formatting using clusterid: CID-052de37d-497f-4dd3-80bc-6c6c8a26d5d0
15/07/13 23:07:55 INFO namenode.FSNamesystem: No KeyProvider found.
15/07/13 23:07:55 INFO namenode.FSNamesystem: fsLock is fair:true
15/07/13 23:07:56 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
15/07/13 23:07:56 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
15/07/13 23:07:56 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
15/07/13 23:07:56 INFO blockmanagement.BlockManager: The block deletion will start around 2015 Jul 13 23:07:56
15/07/13 23:07:56 INFO util.GSet: Computing capacity for map BlocksMap
15/07/13 23:07:56 INFO util.GSet: VM type
15/07/13 23:07:56 INFO util.GSet: 2.0% max memory 966.7 MB = 19.3 MB
15/07/13 23:07:56 INFO util.GSet: capacity      = 2^22 = 4194304 entries
15/07/13 23:07:56 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
15/07/13 23:07:56 INFO blockmanagement.BlockManager: defaultReplication
15/07/13 23:07:56 INFO blockmanagement.BlockManager: maxReplication
15/07/13 23:07:56 INFO blockmanagement.BlockManager: minReplication
15/07/13 23:07:56 INFO blockmanagement.BlockManager: maxReplicationStreams
15/07/13 23:07:56 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks
15/07/13 23:07:56 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
15/07/13 23:07:56 INFO blockmanagement.BlockManager: encryptDataTransfer
15/07/13 23:07:56 INFO blockmanagement.BlockManager: maxNumBlocksToLog
15/07/13 23:07:56 INFO namenode.FSNamesystem: fsOwner             = lenovo (auth:SIMPLE)
15/07/13 23:07:56 INFO namenode.FSNamesystem: supergroup          = supergroup
15/07/13 23:07:56 INFO namenode.FSNamesystem: isPermissionEnabled = true
15/07/13 23:07:56 INFO namenode.FSNamesystem: HA Enabled: false
15/07/13 23:07:56 INFO namenode.FSNamesystem: Append Enabled: true
15/07/13 23:07:56 INFO util.GSet: Computing capacity for map INodeMap
15/07/13 23:07:56 INFO util.GSet: VM type
15/07/13 23:07:56 INFO util.GSet: 1.0% max memory 966.7 MB = 9.7 MB
15/07/13 23:07:56 INFO util.GSet: capacity      = 2^21 = 2097152 entries
15/07/13 23:07:56 INFO namenode.FSDirectory: ACLs enabled? false
15/07/13 23:07:56 INFO namenode.FSDirectory: XAttrs enabled? true
15/07/13 23:07:56 INFO namenode.FSDirectory: Maximum size of an xattr: 16384
15/07/13 23:07:56 INFO namenode.NameNode: Caching file names occuring more than 10 times
15/07/13 23:07:56 INFO util.GSet: Computing capacity for map cachedBlocks
15/07/13 23:07:56 INFO util.GSet: VM type
15/07/13 23:07:56 INFO util.GSet: 0.25% max memory 966.7 MB = 2.4 MB
15/07/13 23:07:56 INFO util.GSet: capacity      = 2^19 = 524288 entries
15/07/13 23:07:56 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.6033
15/07/13 23:07:56 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
15/07/13 23:07:56 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension
15/07/13 23:07:56 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
15/07/13 23:07:56 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
15/07/13 23:07:56 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
15/07/13 23:07:56 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
15/07/13 23:07:56 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
15/07/13 23:07:56 INFO util.GSet: Computing capacity for map NameNodeRetryCache
15/07/13 23:07:56 INFO util.GSet: VM type
15/07/13 23:07:56 INFO util.GSet: 0.447746% max memory 966.7 MB = 297.0 KB
15/07/13 23:07:56 INFO util.GSet: capacity      = 2^16 = 65536 entries
Re-format filesystem in Storage Directory \tmp\hadoop-lenovo\dfs\name ? (Y or N) y
15/07/13 23:08:25 INFO namenode.FSImage: Allocated new BlockPoolId: BP--192.168.0.102-2
15/07/13 23:08:25 INFO common.Storage: Storage directory \tmp\hadoop-lenovo\dfs\name has been successfully formatted.
15/07/13 23:08:26 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
15/07/13 23:08:26 INFO util.ExitUtil: Exiting with status 0
15/07/13 23:08:26 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at 58-PC/192.168.0.102
************************************************************/
lenovo@58-PC ~/hadoop
$ bin/start-all.sh
-bash: bin/start-all.sh: No such file or directory
I have been wrestling with this for days; any pointers would be much appreciated.
Still unsolved, so bumping my own thread.
After installing Hadoop under Cygwin, running

./hadoop version

fails with:

./hadoop: line 297: c:\java\jdk1.6.0_05\bin/bin/java: No such file or directory
./hadoop: line 345: c:\Java\jdk1.6.0_05\bin/bin/java: No such file o...

The answer is in the paths themselves: JAVA_HOME was set to the JDK's bin directory, so the script's $JAVA_HOME/bin/java expands to a bin/bin/java path that does not exist.
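The doubled bin/bin in those error messages is the telltale sign: the hadoop launcher script builds its Java command as $JAVA_HOME/bin/java, so JAVA_HOME must point at the JDK root, not its bin subdirectory. A minimal sketch of the path construction (the JDK paths are just the ones from the error messages, used for illustration):

```java
public class JavaHomeCheck {
    // The hadoop script launches Java as "$JAVA_HOME/bin/java".
    // If JAVA_HOME already ends in /bin, the result contains bin/bin.
    static String launcher(String javaHome) {
        return javaHome + "/bin/java";
    }

    public static void main(String[] args) {
        // Correct: JAVA_HOME is the JDK root directory.
        System.out.println(launcher("c:/java/jdk1.6.0_05"));
        // Wrong: JAVA_HOME points at the bin directory, reproducing
        // the "bin/bin/java: No such file or directory" error above.
        System.out.println(launcher("c:/java/jdk1.6.0_05/bin"));
    }
}
```

Setting JAVA_HOME to the JDK root (without the trailing \bin), for example in hadoop-env.sh or the Cygwin shell profile, should make the script resolve the launcher correctly.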
Other related tips:

1. You may occasionally see the error java.io.IOException: Cannot run program "chmod": CreateProcess error=2. The fix I found online is to add Cygwin's bin directory (for example C:\cygwin\bin) to the Path system environment variable. My guess at the cause: your program needs to run chmod to create directories and set permissions in the Linux environment Cygwin emulates, and the tools that make that possible live in Cygwin's bin directory.
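What happens under the hood is roughly this: Hadoop spawns an external chmod process, and Windows' CreateProcess fails with error=2 ("file not found") when no chmod is on the Path. A hedged sketch of the same kind of probe, so you can check whether the Cygwin tools are visible to Java (the class name and messages are mine, not Hadoop's):

```java
import java.io.IOException;

public class ChmodProbe {
    // Returns true if an external "chmod" process can be spawned, i.e. a
    // chmod binary is on PATH. Hadoop's shell-out to chmod fails in the
    // same way when this returns false.
    static boolean chmodAvailable() {
        try {
            Process p = new ProcessBuilder("chmod", "--help").start();
            p.waitFor();
            return true;
        } catch (IOException e) {
            // On Windows this surfaces as "CreateProcess error=2";
            // the fix is adding C:\cygwin\bin to the system Path.
            return false;
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println("chmod on PATH: " + chmodAvailable());
    }
}
```

If this prints false on your machine, the CreateProcess error above is expected until Cygwin's bin directory is added to the Path.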
2. When installing the Hadoop development plugin, Eclipse sometimes cannot find the plugin or fails with assorted errors. After a lot of fiddling, installing a fresh copy of the latest Eclipse (Mars) solved everything; presumably it helped that it was a clean install, since my previous Eclipse had accumulated far too many plugins.
3. Getting a program to talk to HDFS is not as easy as it looks. Take the following program:
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class MoveFile {
    public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        FileSystem hdfs = FileSystem.get(conf);
        Path source = new Path("/usr/hadoop/input");
        Path destination = new Path("/");
        hdfs.copyFromLocalFile(source, destination);
        FileStatus[] files = hdfs.listStatus(destination);
        System.out.println("Move to " + conf.get("fs.default.name"));
        for (FileStatus file : files) {
            System.out.println(file.getPath());
        }
    }
}
It is meant to move a file from the local (Windows 8) filesystem into HDFS (running in the Linux environment Cygwin emulates), but in the environment this post's title describes, the file merely gets moved within the local filesystem. The fix is to obtain the HDFS handle explicitly with FileSystem hdfs = FileSystem.get(URI.create("hdfs://localhost:9000/"), conf);. As for the cause: stepping through in the debugger shows that FileSystem.get(conf) returns a LocalFileSystem, i.e. the local filesystem. My guess is either that Cygwin's emulation is imperfect and Hadoop still detects that we are running on Windows 8, or that Eclipse never ships the code off to HDFS for execution and simply runs it against the local filesystem unless told otherwise. Either way, it is a filesystem-access pitfall to watch for.
A second fix is to add conf.set("fs.default.name", "hdfs://localhost:9000"); right after Configuration conf = new Configuration();, pointing the configuration's default filesystem at HDFS. That works well enough, though sometimes the explicit FileSystem.get(URI.create("hdfs://localhost:9000/"), conf) form is still needed. And since I had already configured this in my MapReduce settings, isn't setting it again here redundant? I honestly don't know; an explanation would be welcome.
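Why FileSystem.get(conf) hands back the local filesystem is less mysterious than it looks: the factory reads the default-filesystem URI from the configuration (fs.default.name, which falls back to file:///) and dispatches on its scheme, so with no override it picks the local implementation regardless of Cygwin. A simplified, self-contained sketch of that selection logic (the result strings mirror Hadoop's class names, but this is an illustration, not Hadoop code):

```java
import java.net.URI;
import java.util.HashMap;
import java.util.Map;

public class FsSelectionSketch {
    // Mimics how FileSystem.get(conf) chooses an implementation: read the
    // default FS URI (fs.default.name, default "file:///") and match its scheme.
    static String pickFileSystem(Map<String, String> conf) {
        String defaultFs = conf.getOrDefault("fs.default.name", "file:///");
        String scheme = URI.create(defaultFs).getScheme();
        return "hdfs".equals(scheme) ? "DistributedFileSystem" : "LocalFileSystem";
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        // No override: the scheme is "file", so you get the local
        // filesystem, matching what the debugger showed above.
        System.out.println(pickFileSystem(conf));
        // Fix 2 from the text: point the default filesystem at HDFS.
        conf.put("fs.default.name", "hdfs://localhost:9000");
        System.out.println(pickFileSystem(conf));
    }
}
```

This also suggests why both fixes work: one bypasses the default-URI lookup entirely, the other changes what the lookup finds.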