Compiling and Configuring the Hadoop 2.6.0 Eclipse Plugin
Compile the Hadoop 2.6.0 Eclipse plugin

Download the source:

git clone /winghc/hadoop2x-eclipse-plugin.git

Build it:

cd src/contrib/eclipse-plugin
ant jar -Dversion=2.6.0 -Declipse.home=/usr/local/eclipse -Dhadoop.home=/usr/local/hadoop-2.6.0

Note: this needs a manually installed Eclipse; an Eclipse installed through a one-click command-line package will not work. Set eclipse.home and hadoop.home to your own paths. The build output lands here:

[jar] Building jar: /home/hunter/hadoop2x-eclipse-plugin/build/contrib/eclipse-plugin/hadoop-eclipse-plugin-2.6.0.jar

Install the plugin

The desktop user who launches Eclipse should be the Hadoop administrator, i.e. the user Hadoop was installed under; otherwise you will run into read/write permission errors. Copy the compiled jar into Eclipse's plugins directory and restart Eclipse.

Configure the Hadoop installation directory: Window -> Preferences -> Hadoop Map/Reduce -> Hadoop installation directory.

Configure the Map/Reduce views: Window -> Open Perspective -> Other -> Map/Reduce -> OK, then Window -> Show View -> Other -> Map/Reduce Locations -> OK. A "Map/Reduce Locations" tab now appears in the console area.

In the "Map/Reduce Locations" tab, click the elephant icon, or right-click the blank area and choose "New Hadoop location…". In the "New Hadoop location…" dialog, fill in the location settings. Note: the MR Master and DFS Master settings must match mapred-site.xml, core-site.xml and the other configuration files.

Open the Project Explorer to browse the HDFS file system.

Create a Map/Reduce project: File -> New -> Project -> Map/Reduce Project -> Next.

Write the WordCount class (remember to start all the Hadoop services first):

import java.io.IOException;
import java.util.*;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.conf.*;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapred.*;
import org.apache.hadoop.util.*;

public class WordCount {

    public static class Map extends MapReduceBase implements Mapper<LongWritable, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(LongWritable key, Text value, OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException {
            String line = value.toString();
            StringTokenizer tokenizer = new StringTokenizer(line);
            while (tokenizer.hasMoreTokens()) {
                word.set(tokenizer.nextToken());
                output.collect(word, one);
            }
        }
    }

    public static class Reduce extends MapReduceBase implements Reducer<Text, IntWritable, Text, IntWritable> {
        public void reduce(Text key, Iterator<IntWritable> values, OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException {
            int sum = 0;
            while (values.hasNext()) {
                sum += values.next().get();
            }
            output.collect(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(WordCount.class);
        conf.setJobName("wordcount");
        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(IntWritable.class);
        conf.setMapperClass(Map.class);
        conf.setReducerClass(Reduce.class);
        conf.setInputFormat(TextInputFormat.class);
        conf.setOutputFormat(TextOutputFormat.class);
        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));
        JobClient.runJob(conf);
    }
}

Configure the run-time arguments: right-click -> Run As -> Run Configurations. The input argument is the folder you uploaded to HDFS holding the files to process; the output folder (output4 here) receives the results.

Run the program on the Hadoop cluster: right-click -> Run As -> Run on Hadoop. The final output appears under the corresponding HDFS folder. With that, the hadoop-2.6.0 Eclipse plugin setup on Ubuntu is complete.

A problem encountered during configuration: Eclipse could not write to the HDFS file system, which directly prevents programs written in Eclipse from running on Hadoop. Open conf/hdfs-site.xml, find the dfs.permissions property, and change it to false (the default is true):

<property>
    <name>dfs.permissions</name>
    <value>false</value>
</property>

HDFS must be restarted after this change. The simplest alternative is the one mentioned above: make the desktop user who starts Eclipse the Hadoop administrator in the first place.
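A minimal sketch of that restart, assuming the hadoop home used in the build step above (/usr/local/hadoop-2.6.0):

# Restart HDFS so the dfs.permissions change takes effect.
/usr/local/hadoop-2.6.0/sbin/stop-dfs.sh
/usr/local/hadoop-2.6.0/sbin/start-dfs.sh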
Hadoop 2.6.0 Environment Setup: A Concise, No-Frills Guide
Download Hadoop from the official site:
/hadoop/common/
Download the JDK from the official site (1.8.0_25):
/technetwork/java/javase/downloads/index.html
The hadoop-examples jar is used for a simple smoke test:
/Code/Jar/h/Downloadhadoopexamples111jar.htm
Prepare 3-4 machines.
I used 3 virtual machines this time:
1 master and 2 slaves.
Install a 64-bit operating system (e.g. RHEL 6.5).
Set hostnames (this makes everything easier to keep track of):
192.168.1.200 master
192.168.1.201 slave1
192.168.1.202 slave2
Copy these IP-to-hostname mappings into /etc/hosts on every machine.
Set up passwordless SSH access (master to all slaves is sufficient).
Run ssh-keygen -t rsa on each node, then make sure the master's ~/.ssh/id_rsa.pub ends up in ~/.ssh/authorized_keys on every node; a common approach is to collect all nodes' public keys into the master's ~/.ssh/authorized_keys and copy that file out to each slave, as sketched below.
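A minimal sketch of that key distribution (hostnames follow the table above; ssh-copy-id is one convenient way to append a key remotely):

# On the master: generate a key pair (accept the defaults).
ssh-keygen -t rsa
# Append the master's public key to each node's authorized_keys
# so the master can log in without a password.
for host in master slave1 slave2; do
    ssh-copy-id -i ~/.ssh/id_rsa.pub "$host"
done
# Verify: this should print the hostname without prompting.
ssh slave1 hostname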
Unpack the JDK:
tar zxvf jdk-8u25-linux-x64.gz
(Move or symlink the unpacked directory so it matches the JAVA_HOME used below, i.e. /opt/jdk.)
Edit the profile:
vi /etc/profile
Add the following:
JAVA_HOME=/opt/jdk
CLASSPATH=.:$JAVA_HOME/lib/tools.jar
PATH=$JAVA_HOME/bin:$PATH
export JAVA_HOME CLASSPATH PATH
Run the following to confirm the JDK installation:
java -version
Install Hadoop:
tar -zxvf hadoop-2.6.0.tar.gz
(Move the unpacked tree so it matches the HADOOP_HOME used below, e.g. to /opt/hadoop.)
Edit the profile again:
vi /etc/profile
Add:
HADOOP_HOME=/opt/hadoop
HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
PATH=$HADOOP_HOME/bin:$PATH
export HADOOP_HOME HADOOP_CONF_DIR PATH
Make the profile take effect:
source /etc/profile
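A quick sanity check that the paths above took effect:

# Confirm the hadoop binaries are on the PATH.
hadoop version
# The first line of output should read "Hadoop 2.6.0".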
Modify core-site.xml
(the one under /opt/hadoop/etc/hadoop):
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9000</value>
    <description>NameNode URI.</description>
  </property>
  <property>
    <name>io.file.buffer.size</name>
    <value>131072</value>
    <description>Size of read/write buffer used in SequenceFiles.</description>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/data/hadoop/tmp</value>
    <description>A base for other temporary directories.</description>
  </property>
</configuration>
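hadoop.tmp.dir points at a local path; it does no harm to create it up front on every node (a one-line sketch, using the value configured above):

# Create the temp base directory as the user who will run Hadoop.
mkdir -p /data/hadoop/tmp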
Edit hdfs-site.xml:
<configuration>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>master:50090</value>
    <description>The secondary namenode http server address and port.</description>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///data/dfs/name</value>
    <description>Path on the local filesystem where the NameNode stores the namespace and transactions logs persistently.</description>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///data/dfs/data</value>
    <description>Comma separated list of paths on the local filesystem of a DataNode where it should store its blocks.</description>
  </property>
  <property>
    <name>dfs.namenode.checkpoint.dir</name>
    <value>file:///data/dfs/namesecondary</value>
    <description>Determines where on the local filesystem the DFS secondary name node should store the temporary images to merge. If this is a comma-delimited list of directories then the image is replicated in all of the directories for redundancy.</description>
  </property>
</configuration>
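A sketch of creating the name, data and checkpoint paths configured above on the machines that need them:

# On the master: namespace and checkpoint directories.
mkdir -p /data/dfs/name /data/dfs/namesecondary
# On each slave: block storage directory.
mkdir -p /data/dfs/data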
Edit the slaves file (one worker hostname per line, as sketched below).
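The guide does not show the file's contents; given the host table above, it would presumably be the two worker hostnames:

# /opt/hadoop/etc/hadoop/slaves — one DataNode/NodeManager host per line.
cat > /opt/hadoop/etc/hadoop/slaves <<'EOF'
slave1
slave2
EOF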
First, format the namenode:
hdfs namenode -format
Set the JAVA_HOME variable in the /opt/hadoop/etc/hadoop/hadoop-env.sh file, then start HDFS:
start-dfs.sh
On the master node: [root@master sbin]# jps
2291 DataNode
2452 SecondaryNameNode
2170 NameNode
On a slave: [root@slave1 .ssh]# jps
1841 DataNode
Edit yarn-site.xml:
<configuration>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>master</value>
    <description>The hostname of the RM.</description>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
    <description>Shuffle service that needs to be set for Map Reduce applications.</description>
  </property>
</configuration>
Edit mapred-site.xml:
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
    <description>The runtime framework for executing MapReduce jobs. Can be one of local, classic or yarn.</description>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>master:10020</value>
    <description>MapReduce JobHistory Server IPC host:port</description>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>master:19888</value>
    <description>MapReduce JobHistory Server Web UI host:port</description>
  </property>
</configuration>
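Note that the JobHistory addresses above only matter if the history server is actually running; it is not started by start-dfs.sh or start-yarn.sh. A sketch of starting it manually (the script ships in Hadoop 2.6's sbin directory):

# Start the MapReduce JobHistory server on the master.
$HADOOP_HOME/sbin/mr-jobhistory-daemon.sh start historyserver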
Start the YARN resource manager by running:
start-yarn.sh
Check with jps:
[root@master sbin]# jps
2720 NodeManager
2291 DataNode
2452 SecondaryNameNode
2170 NameNode
2621 ResourceManager
[root@slave1 .ssh]# jps
1841 DataNode
1958 NodeManager
For example, run the following command on the master:
# hadoop jar hadoop-examples-1.2.1.jar pi 1 10
This command exercises distributed computation by estimating the value of pi. The first argument is the number of map tasks to run; the second is the number of samples each map task computes.
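The jar used above is the separately downloaded Hadoop 1.2.1 examples jar; Hadoop 2.6.0 also ships its own examples jar, so the same smoke test can presumably be run without the extra download:

# Use the examples jar bundled with Hadoop 2.6.0 instead:
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar pi 1 10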
Finally, use a browser for graphical monitoring of the cluster.
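A minimal scripted check that both web UIs are reachable (default ports for Hadoop 2.6; hostname from the table above):

# NameNode web UI (default port 50070) — expect HTTP 200:
curl -s -o /dev/null -w "%{http_code}\n" http://master:50070/
# ResourceManager web UI (default port 8088) — expect HTTP 200:
curl -s -o /dev/null -w "%{http_code}\n" http://master:8088/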
Hadoop 2.6.0 Cluster Environment Setup (stark_summer's CSDN column)
I. Environment
1. Machines: one physical machine and one virtual machine.
2. Linux version:
[spark@S1PA11 ~]$ cat /etc/issue
Red Hat Enterprise Linux Server release 5.4 (Tikanga)
3. JDK:
[spark@S1PA11 ~]$ java -version
java version "1.6.0_27"
Java(TM) SE Runtime Environment (build 1.6.0_27-b07)
Java HotSpot(TM) 64-Bit Server VM (build 20.2-b06, mixed mode)
4. Cluster nodes: two — S1PA11 (Master) and S1PA222 (Slave).
II. Preparation
1. Install the Java JDK (written up in a previous post).
2. Set up passwordless SSH verification.
3. Download the Hadoop release.
III. Install Hadoop
This is the downloaded hadoop-2.6.0.tar.gz archive.
1. Unpack it: tar -xzvf hadoop-2.6.0.tar.gz
2. Move it into place: [spark@S1PA11 software]$ mv hadoop-2.6.0 ~/opt/
3. Enter the hadoop directory:
[spark@S1PA11 opt]$ cd hadoop-2.6.0/
[spark@S1PA11 hadoop-2.6.0]$ ls
bin  dfs  etc  include  input  lib  libexec  LICENSE.txt  logs  NOTICE.txt  README.txt  sbin  share  tmp
Before configuring, create the following folders on the local filesystem: ~/hadoop/tmp, ~/dfs/data, ~/dfs/name (see the sketch after the list below). Seven configuration files are involved, all under the hadoop tree's etc/hadoop folder; gedit works fine for editing them:
~/hadoop/etc/hadoop/hadoop-env.sh
~/hadoop/etc/hadoop/yarn-env.sh
~/hadoop/etc/hadoop/slaves
~/hadoop/etc/hadoop/core-site.xml
~/hadoop/etc/hadoop/hdfs-site.xml
~/hadoop/etc/hadoop/mapred-site.xml
~/hadoop/etc/hadoop/yarn-site.xml
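A sketch of that folder-creation step. Note that the paths actually referenced by the XML below live under ~/opt/hadoop-2.6.0 (see the values in core-site.xml and hdfs-site.xml), so creating those as well keeps the configs and the filesystem in agreement:

# Folders as listed in the article:
mkdir -p ~/hadoop/tmp ~/dfs/data ~/dfs/name
# Folders as actually referenced by the configs below:
mkdir -p ~/opt/hadoop-2.6.0/tmp ~/opt/hadoop-2.6.0/dfs/name ~/opt/hadoop-2.6.0/dfs/data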
4. Go into the hadoop configuration directory:
[spark@S1PA11 hadoop-2.6.0]$ cd etc/hadoop/
[spark@S1PA11 hadoop]$ ls
capacity-scheduler.xml  hadoop-env.sh               httpfs-env.sh            kms-env.sh            mapred-env.sh               ssl-client.xml.example
configuration.xsl       hadoop-metrics2.properties  httpfs-log4j.properties  kms-log4j.properties  mapred-queues.xml.template  ssl-server.xml.example
container-executor.cfg  hadoop-metrics.properties   httpfs-signature.secret  kms-site.xml          mapred-site.xml             yarn-env.cmd
core-site.xml           hadoop-policy.xml           httpfs-site.xml          log4j.properties      mapred-site.xml.template    yarn-env.sh
hadoop-env.cmd          hdfs-site.xml               kms-acls.xml             mapred-env.cmd        slaves                      yarn-site.xml
4.1 Configure the hadoop-env.sh file -> change JAVA_HOME:
# The java implementation to use.
export JAVA_HOME=/home/spark/opt/java/jdk1.6.0_37
4.2 Configure the yarn-env.sh file -> change JAVA_HOME:
# some Java parameters
export JAVA_HOME=/home/spark/opt/java/jdk1.6.0_37
4.3 Configure the slaves file -> add the slave node (a sketch follows).
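The article omits the file's contents; given the node table above (S1PA222 is the only slave), it would presumably be:

# ~/opt/hadoop-2.6.0/etc/hadoop/slaves — one worker host per line.
echo S1PA222 > ~/opt/hadoop-2.6.0/etc/hadoop/slaves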
4.4 Configure the core-site.xml file -> add the core Hadoop settings (HDFS port 9000; temp dir file:/home/spark/opt/hadoop-2.6.0/tmp):
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://S1PA11:9000</value>
  </property>
  <property>
    <name>io.file.buffer.size</name>
    <value>131072</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>file:/home/spark/opt/hadoop-2.6.0/tmp</value>
    <description>A base for other temporary directories.</description>
  </property>
  <property>
    <name>hadoop.proxyuser.spark.hosts</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.spark.groups</name>
    <value>*</value>
  </property>
</configuration>
4.5 Configure the hdfs-site.xml file -> add the HDFS settings (namenode and datanode ports and directory locations). Note that dfs.replication is set to 3 even though this cluster has only one datanode, so blocks will show up as under-replicated later in the dfsadmin report:
<configuration>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>S1PA11:9001</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/home/spark/opt/hadoop-2.6.0/dfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/home/spark/opt/hadoop-2.6.0/dfs/data</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
  </property>
</configuration>
4.6 Configure the mapred-site.xml file -> add the MapReduce settings (use the YARN framework; JobHistory address and web address):
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>S1PA11:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>S1PA11:19888</value>
  </property>
</configuration>
4.7 Configure the yarn-site.xml file -> add the YARN settings:
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>S1PA11:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>S1PA11:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>S1PA11:8035</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>S1PA11:8033</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>S1PA11:8088</value>
  </property>
</configuration>
5. Copy the configured hadoop tree to the other (slave) machine:
[spark@S1PA11 opt]$ scp -r hadoop-2.6.0/ spark@10.126.34.43:~/opt/
1. Format the namenode (strictly speaking only the master's namenode needs formatting, since the slave runs no namenode; the second format below is redundant):
[spark@S1PA11 opt]$ cd hadoop-2.6.0/
[spark@S1PA11 hadoop-2.6.0]$ ls
bin  dfs  etc  include  input  lib  libexec  LICENSE.txt  logs  NOTICE.txt  README.txt  sbin  share  tmp
[spark@S1PA11 hadoop-2.6.0]$ ./bin/hdfs namenode -format
[spark@S1PA222 .ssh]$ cd ~/opt/hadoop-2.6.0
[spark@S1PA222 hadoop-2.6.0]$ ./bin/hdfs namenode -format
2. Start HDFS:
[spark@S1PA11 hadoop-2.6.0]$ ./sbin/start-dfs.sh
(The Master and Worker processes in the jps listings below are Spark daemons already running on these machines, not Hadoop processes.)
15/01/05 16:41:04 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [S1PA11]
S1PA11: starting namenode, logging to /home/spark/opt/hadoop-2.6.0/logs/hadoop-spark-namenode-S1PA11.out
S1PA222: starting datanode, logging to /home/spark/opt/hadoop-2.6.0/logs/hadoop-spark-datanode-S1PA222.out
Starting secondary namenodes [S1PA11]
S1PA11: starting secondarynamenode, logging to /home/spark/opt/hadoop-2.6.0/logs/hadoop-spark-secondarynamenode-S1PA11.out
15/01/05 16:41:21 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[spark@S1PA11 hadoop-2.6.0]$ jps
22230 Master
22478 Worker
30498 NameNode
30733 SecondaryNameNode
19781 ResourceManager
3. Stop HDFS:
[spark@S1PA11 hadoop-2.6.0]$ ./sbin/stop-dfs.sh
15/01/05 16:40:28 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Stopping namenodes on [S1PA11]
S1PA11: stopping namenode
S1PA222: stopping datanode
Stopping secondary namenodes [S1PA11]
S1PA11: stopping secondarynamenode
15/01/05 16:40:48 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[spark@S1PA11 hadoop-2.6.0]$ jps
22230 Master
22478 Worker
19781 ResourceManager
4. Start YARN:
[spark@S1PA11 hadoop-2.6.0]$ ./sbin/start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /home/spark/opt/hadoop-2.6.0/logs/yarn-spark-resourcemanager-S1PA11.out
S1PA222: starting nodemanager, logging to /home/spark/opt/hadoop-2.6.0/logs/yarn-spark-nodemanager-S1PA222.out
[spark@S1PA11 hadoop-2.6.0]$ jps
31233 ResourceManager
22230 Master
22478 Worker
30498 NameNode
30733 SecondaryNameNode
5. Stop YARN:
[spark@S1PA11 hadoop-2.6.0]$ ./sbin/stop-yarn.sh
stopping yarn daemons
stopping resourcemanager
S1PA222: stopping nodemanager
no proxyserver to stop
[spark@S1PA11 hadoop-2.6.0]$ jps
22230 Master
22478 Worker
30498 NameNode
30733 SecondaryNameNode
6. Check the cluster status:
[spark@S1PA11 hadoop-2.6.0]$ ./bin/hdfs dfsadmin -report
15/01/05 16:44:50 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Configured Capacity: (48.52 GB)
Present Capacity: (42.61 GB)
DFS Remaining: (42.61 GB)
DFS Used: (4 KB)
DFS Used%: 0.00%
Under replicated blocks: 10
Blocks with corrupt replicas: 0
Missing blocks: 0
-------------------------------------------------
Live datanodes (1):
Name: 10.126.45.56:50010 (S1PA222)
Hostname: S1PA209
Decommission Status : Normal
Configured Capacity: (48.52 GB)
DFS Used: (4 KB)
Non DFS Used:
DFS Remaining: (42.61 GB)
DFS Used%: 0.00%
DFS Remaining%: 87.81%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Mon Jan 05 16:44:50 CST 2015
7. View HDFS: http://10.58.44.47:50070/
8. View the RM: http://10.58.44.47:8088/
9. Run the wordcount program
9.1 Create an input directory: [spark@S1PA11 hadoop-2.6.0]$ mkdir input
9.2 Create f1 and f2 under input and write some content into them:
[spark@S1PA11 hadoop-2.6.0]$ cat input/f1
Hello world bye jj
[spark@S1PA11 hadoop-2.6.0]$ cat input/f2
Hello Hadoop bye Hadoop
9.3 Create the /tmp/input directory on HDFS:
[spark@S1PA11 hadoop-2.6.0]$ ./bin/hadoop fs -mkdir /tmp
15/01/05 16:53:57 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[spark@S1PA11 hadoop-2.6.0]$ ./bin/hadoop fs -mkdir /tmp/input
15/01/05 16:54:16 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
9.4 Copy the f1 and f2 files to the HDFS /tmp/input directory:
[spark@S1PA11 hadoop-2.6.0]$ ./bin/hadoop fs -put input/ /tmp
15/01/05 16:56:01 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
9.5 Check that the f1 and f2 files are on HDFS:
[spark@S1PA11 hadoop-2.6.0]$ ./bin/hadoop fs -ls /tmp/input/
15/01/05 16:57:42 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found 2 items
-rw-r--r--   3 spark supergroup         20 19:09 /tmp/input/f1
-rw-r--r--   3 spark supergroup         25 19:09 /tmp/input/f2
9.6 Run the wordcount program:
[spark@S1PA11 hadoop-2.6.0]$ ./bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar wordcount /tmp/input /output
15/01/05 17:00:09 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/01/05 17:00:09 INFO client.RMProxy: Connecting to ResourceManager at S1PA11/10.58.44.47:8032
15/01/05 17:00:11 INFO input.FileInputFormat: Total input paths to process : 2
15/01/05 17:00:11 INFO mapreduce.JobSubmitter: number of splits:2
15/01/05 17:00:11 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_2_0001
15/01/05 17:00:12 INFO impl.YarnClientImpl: Submitted application application_2_0001
15/01/05 17:00:12 INFO mapreduce.Job: The url to track the job: http://S1PA11:8088/proxy/application_2_0001/
15/01/05 17:00:12 INFO mapreduce.Job: Running job: job_2_0001
9.7 Check the result:
[spark@S1PA11 hadoop-2.6.0]$ ./bin/hadoop fs -cat /output/part-r-00000
15/01/05 17:06:10 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
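The post cuts off before showing the cat output. Given the two input files above, the expected counts (one word per line, tab-separated, keys in Text byte order) would be:

Hadoop	2
Hello	2
bye	2
jj	1
world	1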