
Hadoop: A Detailed Record of Debugging a First MapReduce Program
Source: Linux Community
Author: 黄杉
Development environment setup reference: Setting Up a Hadoop Development Environment with Eclipse on Windows 7.
1. The program code is as follows:
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class W2 {

    // Mapper: tokenize each input line and emit <word, 1> for every token.
    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
            }
        }
    }

    // Reducer (also used as the combiner): sum the counts for each word.
    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        System.setProperty("hadoop.home.dir", "E:/hadoop/hadoop-2.3.0");
        String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
        if (otherArgs.length != 2) {
            System.err.println("Usage: wordcount <in> <out>");
            System.exit(2);
        }
        Job job = new Job(conf, "word count");
        job.setJarByClass(W2.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
        FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
2. How to run:
In Eclipse, right-click in the W2.java code area and choose Run on Hadoop to run the program.
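The W2 program expects exactly two arguments, the input and output paths. When launching through a run configuration, they would be supplied under Run Configurations > Arguments along these lines (the exact HDFS paths are assumptions here, taken from the locations that appear in the logs later in this article):

    hdfs://192.168.52.128:9000/data/input hdfs://192.168.52.128:9000/data/output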
3. Runtime error (1):
Exception in thread "main" java.lang.NoClassDefFoundError: com/google/common/base/Preconditions
    at org.apache.hadoop.conf.Configuration$DeprecationDelta.<init>(Configuration.java:314)
    at org.apache.hadoop.conf.Configuration$DeprecationDelta.<init>(Configuration.java:327)
    at org.apache.hadoop.conf.Configuration.<clinit>(Configuration.java:409)
    at wc.WordCount.main(WordCount.java:82)
Caused by: java.lang.ClassNotFoundException: com.google.common.base.Preconditions
    at java.net.URLClassLoader$1.run(Unknown Source)
    at java.net.URLClassLoader$1.run(Unknown Source)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(Unknown Source)
    at java.lang.ClassLoader.loadClass(Unknown Source)
    at sun.misc.Launcher$AppClassLoader.loadClass(Unknown Source)
    at java.lang.ClassLoader.loadClass(Unknown Source)
    ... 4 more
The guava-r07.jar package is missing; add it to the project's build path.
4. Runtime error (2):
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/util/PlatformName
The hadoop-auth-2.2.0.jar package is missing; it can be found at ./eclipse/configuration/org.eclipse.osgi/bundles/230/1/.cp/lib/hadoop-auth-2.2.0.jar.
5. Runtime error (3):
Exception in thread "main" java.lang.NoClassDefFoundError: org/slf4j/LoggerFactory
Two jars are missing:
/usr/local/eclipse/configuration/org.eclipse.osgi/bundles/230/1/.cp/lib/slf4j-api-1.7.5.jar
/usr/local/eclipse/configuration/org.eclipse.osgi/bundles/230/1/.cp/lib/slf4j-log4j12-1.7.5.jar
6. Runtime error (4):
Running Hadoop from Eclipse reports:
20:12:01,750 INFO  [main] Configuration.deprecation (Configuration.java:warnOnceIfDeprecated(996)) - fs.default.name is deprecated. Instead, use fs.defaultFS
SLF4J: This version of SLF4J requires log4j version 1.2.12 or later. See also http://www.slf4j.org/codes.html#log4j_version
20:12:02,760 WARN  [main] util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
20:12:02,812 ERROR [main] util.Shell (Shell.java:getWinUtilsPath(336)) - Failed to locate the winutils binary in the hadoop binary path
java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
Add System.setProperty("hadoop.home.dir", "d:/hadoop"); to the code, then check whether winutils.exe is present under the bin directory of the Hadoop directory on Windows; if not, download a copy and put it there.
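Before launching the job, a tiny pre-flight check along these lines makes the failure obvious up front (this checker is our own sketch; the class name and messages are not part of the original program):

    import java.io.File;

    // Sketch: verify that winutils.exe is where Hadoop's Shell class will look
    // (the hadoop.home.dir system property, falling back to HADOOP_HOME).
    public class WinutilsCheck {
        public static void main(String[] args) {
            String home = System.getProperty("hadoop.home.dir");
            if (home == null) {
                home = System.getenv("HADOOP_HOME");
            }
            if (home == null) {
                System.err.println("Neither hadoop.home.dir nor HADOOP_HOME is set.");
                return;
            }
            File winutils = new File(home, "bin" + File.separator + "winutils.exe");
            if (!winutils.isFile()) {
                System.err.println("winutils.exe not found at " + winutils
                        + " -- download a copy and put it in that bin directory.");
            }
        }
    }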
7. Runtime error (5):
Exception in thread "main" java.lang.NoClassDefFoundError: com/google/protobuf/ServiceException
    at org.apache.hadoop.ipc.ProtobufRpcEngine.<clinit>(ProtobufRpcEngine.java:69)
    at java.lang.Class.forName0(Native Method)
The jar /usr/local/app/apache-tomcat-6.0.37_9090/webapps/solr/WEB-INF/lib/protobuf-java-2.4.0a.jar was missing; adding that 2.4.0a jar then produced
Exception in thread "main" java.lang.VerifyError: class org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$AppendRequestProto overrides final method getUnknownFields.()Lcom/google/protobuf/UnknownFieldSet;
so it has to be replaced with protobuf-java-2.5.0.jar.
8. Runtime error (6):
Caused by: java.lang.ClassNotFoundException: com.google.common.cache.CacheBuilder
    at java.net.URLClassLoader$1.run(Unknown Source)
    at java.net.URLClassLoader$1.run(Unknown Source)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(Unknown Source)
    at java.lang.ClassLoader.loadClass(Unknown Source)
    at sun.misc.Launcher$AppClassLoader.loadClass(Unknown Source)
    at java.lang.ClassLoader.loadClass(Unknown Source)
    ... 12 more
The guava-11.0.2.jar package is missing.
9. Runtime error (7):
Exception in thread "main" org.apache.hadoop.security.AccessControlException: Permission denied: user=Administrator, access=EXECUTE, inode="/tmp":hadoop:supergroup:drwx------
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:234)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:187)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:150)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:5433)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:5415)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:5371)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1462)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1443)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:536)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:368)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1962)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1958)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1956)
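The article gives no separate fix for this error (the account rename in section 10 below also resolves it). Another common client-side workaround, assuming the cluster runs with simple authentication as the auth:SIMPLE in these logs indicates, is to have the Windows client act as the hadoop user; Hadoop 2.x consults the HADOOP_USER_NAME environment variable and, failing that, a Java system property of the same name:

    // Workaround sketch (our addition, not from the original article):
    // impersonate the "hadoop" user before any FileSystem/UGI code runs.
    // Only effective on clusters using simple (non-Kerberos) authentication.
    System.setProperty("HADOOP_USER_NAME", "hadoop");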
10. Runtime error (8):
The error output:
10:16:09,632 INFO  [main] Configuration.deprecation (Configuration.java:warnOnceIfDeprecated(996)) - fs.default.name is deprecated. Instead, use fs.defaultFS
10:16:11,597 WARN  [main] util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Job start!
10:16:28,819 INFO  [main] client.RMProxy (RMProxy.java:createRMProxy(92)) - Connecting to ResourceManager at /192.168.52.128:8032
10:16:29,714 WARN  [main] security.UserGroupInformation (UserGroupInformation.java:doAs(1551)) - PriviledgedActionException as:Administrator (auth:SIMPLE) cause:java.io.IOException: The ownership on the staging directory /tmp/hadoop-yarn/staging/Administrator/.staging is not as expected. It is owned by hadoop. The directory must be owned by the submitter Administrator or by Administrator
Exception in thread "main" java.io.IOException: The ownership on the staging directory /tmp/hadoop-yarn/staging/Administrator/.staging is not as expected. It is owned by hadoop. The directory must be owned by the submitter Administrator or by Administrator
    at org.apache.hadoop.mapreduce.JobSubmissionFiles.getStagingDir(JobSubmissionFiles.java:112)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:348)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Unknown Source)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1282)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1303)
    at wc.WordCount.main(WordCount.java:147)
Solution:
On the Windows machine, open Local Users and Groups, expand Users, find the administrator account Administrator, and rename it to hadoop. (The original article shows the result in a screenshot.)
Finally, log off or reboot Windows so that the renamed account takes effect. On the next run everything is normal and the client connects to the Hadoop service on Linux; the console shows:
11:01:07,009 INFO  [main] Configuration.deprecation (Configuration.java:warnOnceIfDeprecated(996)) - fs.default.name is deprecated. Instead, use fs.defaultFS
11:01:12,938 WARN  [main] util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Job start!
11:01:39,646 INFO  [main] client.RMProxy (RMProxy.java:createRMProxy(92)) - Connecting to ResourceManager at /192.168.52.128:8032
11:01:49,297 INFO  [main] mapreduce.JobSubmissionFiles (JobSubmissionFiles.java:getStagingDir(119)) - Permissions on staging directory /tmp/hadoop-yarn/staging/hadoop/.staging are incorrect: rwxrwxrwx. Fixing permissions to correct value rwx------
11:01:56,366 WARN  [main] mapreduce.JobSubmitter (JobSubmitter.java:copyAndConfigureFiles(150)) - Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
11:02:14,657 INFO  [main] input.FileInputFormat (FileInputFormat.java:listStatus(287)) - Total input paths to process : 1
11:02:15,781 INFO  [main] mapreduce.JobSubmitter (JobSubmitter.java:submitJobInternal(396)) - number of splits:1
11:02:16,057 INFO  [main] Configuration.deprecation (Configuration.java:warnOnceIfDeprecated(996)) - fs.default.name is deprecated. Instead, use fs.defaultFS
11:02:16,711 INFO  [main] mapreduce.JobSubmitter (JobSubmitter.java:printTokens(479)) - Submitting tokens for job: job_5_0001
11:02:20,493 INFO  [main] impl.YarnClientImpl (YarnClientImpl.java:submitApplication(166)) - Submitted application application_5_0001
11:02:21,353 INFO  [main] mapreduce.Job (Job.java:submit(1289)) - The url to track the job: http://name01:8088/proxy/application_5_0001/
11:02:21,393 INFO  [main] mapreduce.Job (Job.java:monitorAndPrintJob(1334)) - Running job: job_5_0001
11:02:45,306 INFO  [main] mapreduce.Job (Job.java:monitorAndPrintJob(1355)) - Job job_5_0001 running in uber mode : false
11:02:45,392 INFO  [main] mapreduce.Job (Job.java:monitorAndPrintJob(1362)) - map 0% reduce 0%
11:02:45,543 INFO  [main] mapreduce.Job (Job.java:monitorAndPrintJob(1375)) - Job job_5_0001 failed with state FAILED due to: Application application_5_0001 failed 2 times due to AM Container for appattempt_5_ exited with exitCode: 1 due to: Exception from container-launch: org.apache.hadoop.util.Shell$ExitCodeException: /bin/bash: line 0: fg: no job control
org.apache.hadoop.util.Shell$ExitCodeException: /bin/bash: line 0: fg: no job control
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
    at org.apache.hadoop.util.Shell.run(Shell.java:418)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
    at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:283)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:79)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Container exited with a non-zero exit code 1
.Failing this attempt.. Failing the application.
11:02:45,955 INFO  [main] mapreduce.Job (Job.java:monitorAndPrintJob(1380)) - Counters: 0
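The article records no fix for this failure. A widely used remedy, offered here as a hedged suggestion rather than the author's own solution, is to make the client generate platform-independent container launch commands so a job submitted from Windows can start on Linux NodeManagers. The property below was introduced in later Hadoop 2.x releases (MAPREDUCE-4052); on clients that predate it, the usual workarounds were patching YARNRunner or submitting the job from the Linux side. In main(), right after the Configuration is created:

    // Generate cross-platform container launch commands (%VAR% vs ${VAR})
    // so the AM's environment and classpath are expanded correctly on Linux.
    conf.set("mapreduce.app-submission.cross-platform", "true");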
11. Runtime error (9):
15:31:45,980 INFO  [main] Configuration.deprecation (Configuration.java:warnOnceIfDeprecated(996)) - session.id is deprecated. Instead, use dfs.metrics.session-id
15:31:45,986 INFO  [main] jvm.JvmMetrics (JvmMetrics.java:init(76)) - Initializing JVM Metrics with processName=JobTracker, sessionId=
15:31:46,213 WARN  [main] security.UserGroupInformation (UserGroupInformation.java:doAs(1551)) - PriviledgedActionException as:hadoop (auth:SIMPLE) cause:org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory hdfs://192.168.52.128:9000/data/output already exists
Exception in thread "main" org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory hdfs://192.168.52.128:9000/data/output already exists
    at org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:146)
    at org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:458)
Delete the existing /data/output directory (for example with hadoop fs -rm -r /data/output).
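Deleting it by hand works, but it is also common to let the driver clean up stale output before submitting. A minimal sketch (our addition, not in the original code; conf and otherArgs are the variables from W2's main() above):

    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    // Remove a stale output directory so reruns do not fail
    // with FileAlreadyExistsException.
    Path outPath = new Path(otherArgs[1]);
    FileSystem fs = FileSystem.get(conf);
    if (fs.exists(outPath)) {
        fs.delete(outPath, true);  // true = delete recursively
    }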
12. Runtime error (10):
Could not locate executable null\bin\winutils.exe in the Hadoop binaries
This is the same old problem: the HADOOP_HOME system variable was never set. Either set HADOOP_HOME as a system variable, or add one line of code to point at the path directly:
    System.setProperty("hadoop.home.dir", "E:/hadoop/hadoop-2.3.0");
13. Runtime error (11):
14:28:58,589 WARN  [main] util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
14:29:08,664 INFO  [main] Configuration.deprecation (Configuration.java:warnOnceIfDeprecated(996)) - session.id is deprecated. Instead, use dfs.metrics.session-id
14:29:08,665 INFO  [main] jvm.JvmMetrics (JvmMetrics.java:init(76)) - Initializing JVM Metrics with processName=JobTracker, sessionId=
14:29:10,026 INFO  [main] input.FileInputFormat (FileInputFormat.java:listStatus(287)) - Total input paths to process : 1
14:29:11,164 INFO  [main] mapreduce.JobSubmitter (JobSubmitter.java:submitJobInternal(396)) - number of splits:1
14:29:11,761 INFO  [main] mapreduce.JobSubmitter (JobSubmitter.java:printTokens(479)) - Submitting tokens for job: job_local_0001
14:29:11,810 WARN  [main] conf.Configuration (Configuration.java:loadProperty(2345)) - file:/tmp/hadoop-hadoop/mapred/staging/hadoop/.staging/job_local_0001/job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval;  Ignoring.
14:29:11,811 WARN  [main] conf.Configuration (Configuration.java:loadProperty(2345)) - file:/tmp/hadoop-hadoop/mapred/staging/hadoop/.staging/job_local_0001/job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts;  Ignoring.
14:29:11,916 INFO  [main] mapreduce.JobSubmitter (JobSubmitter.java:submitJobInternal(441)) - Cleaning up the staging area file:/tmp/hadoop-hadoop/mapred/staging/hadoop/.staging/job_local_0001
Exception in thread "main" java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z
    at org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Native Method)
    at org.apache.hadoop.io.nativeio.NativeIO$Windows.access(NativeIO.java:560)
    at org.apache.hadoop.fs.FileUtil.canRead(FileUtil.java:977)
    at org.apache.hadoop.util.DiskChecker.checkAccessByFileMethods(DiskChecker.java:177)
    at org.apache.hadoop.util.DiskChecker.checkDirAccess(DiskChecker.java:164)
    at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:98)
    at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.confChanged(LocalDirAllocator.java:285)
    at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:344)
    at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:150)
    at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:131)
    at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:115)
    at org.apache.hadoop.mapred.LocalDistributedCacheManager.setup(LocalDistributedCacheManager.java:131)
    at org.apache.hadoop.mapred.LocalJobRunner$Job.<init>(LocalJobRunner.java:163)
    at org.apache.hadoop.mapred.LocalJobRunner.submitJob(LocalJobRunner.java:731)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:432)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Unknown Source)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1282)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1303)
    at wc.W2.main(W2.java:111)
hadoop.dll is missing: download hadoop.dll and place it in the hadoop/bin directory. Running after that still failed, though; the Hadoop runtime path on Windows also had to be set by hand. In Eclipse, right-click the WordCount.java being run, choose Run Configurations from the context menu, and add the PATH setting there, after which Run went through cleanly. (The original article shows the parameters in a screenshot.)
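What that setting most plausibly looks like (a reconstruction on our part; the exact entry is only shown in the original screenshot) is an environment variable in the run configuration that puts the Hadoop bin directory, where hadoop.dll and winutils.exe live, on the search path:

    PATH = %PATH%;E:\hadoop\hadoop-2.3.0\bin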
After that, debugging succeeded and the run produced the following output:
15:34:01,303 INFO  [main] Configuration.deprecation (Configuration.java:warnOnceIfDeprecated(996)) - session.id is deprecated. Instead, use dfs.metrics.session-id
15:34:01,309 INFO  [main] jvm.JvmMetrics (JvmMetrics.java:init(76)) - Initializing JVM Metrics with processName=JobTracker, sessionId=
15:34:02,047 INFO  [main] input.FileInputFormat (FileInputFormat.java:listStatus(287)) - Total input paths to process : 1
15:34:02,120 INFO  [main] mapreduce.JobSubmitter (JobSubmitter.java:submitJobInternal(396)) - number of splits:1
15:34:02,323 INFO  [main] mapreduce.JobSubmitter (JobSubmitter.java:printTokens(479)) - Submitting tokens for job: job_local_0001
15:34:02,367 WARN  [main] conf.Configuration (Configuration.java:loadProperty(2345)) - file:/tmp/hadoop-hadoop/mapred/staging/hadoop/.staging/job_local_0001/job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval;  Ignoring.
15:34:02,368 WARN  [main] conf.Configuration (Configuration.java:loadProperty(2345)) - file:/tmp/hadoop-hadoop/mapred/staging/hadoop/.staging/job_local_0001/job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts;  Ignoring.
15:34:02,682 WARN  [main] conf.Configuration (Configuration.java:loadProperty(2345)) - file:/tmp/hadoop-hadoop/mapred/local/localRunner/hadoop/job_local_0001/job_local_0001.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval;  Ignoring.
15:34:02,682 WARN  [main] conf.Configuration (Configuration.java:loadProperty(2345)) - file:/tmp/hadoop-hadoop/mapred/local/localRunner/hadoop/job_local_0001/job_local_0001.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts;  Ignoring.
15:34:02,703 INFO  [main] mapreduce.Job (Job.java:submit(1289)) - The url to track the job: http://localhost:8080/
15:34:02,704 INFO  [main] mapreduce.Job (Job.java:monitorAndPrintJob(1334)) - Running job: job_local_0001
15:34:02,707 INFO  [Thread-4] mapred.LocalJobRunner (LocalJobRunner.java:createOutputCommitter(471)) - OutputCommitter set in config null
15:34:02,719 INFO  [Thread-4] mapred.LocalJobRunner (LocalJobRunner.java:createOutputCommitter(489)) - OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
15:34:02,853 INFO  [Thread-4] mapred.LocalJobRunner (LocalJobRunner.java:runTasks(448)) - Waiting for map tasks
15:34:02,857 INFO  [LocalJobRunner Map Task Executor #0] mapred.LocalJobRunner (LocalJobRunner.java:run(224)) - Starting task: attempt_local_0001_m_
15:34:02,919 INFO  [LocalJobRunner Map Task Executor #0] util.ProcfsBasedProcessTree (ProcfsBasedProcessTree.java:isAvailable(129)) - ProcfsBasedProcessTree currently is supported only on Linux.
15:34:03,281 INFO  [LocalJobRunner Map Task Executor #0] mapred.Task (Task.java:initialize(581)) - Using ResourceCalculatorProcessTree : org.apache.hadoop.yarn.util.WindowsBasedProcessTree@2e1022ec
15:34:03,287 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:runNewMapper(733)) - Processing split: hdfs://192.168.52.128:9000/data/input/README.txt:0+1366
15:34:03,304 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:createSortingCollector(388)) - Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
15:34:03,340 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:setEquator(1181)) - (EQUATOR) 0 kvi 857584)
15:34:03,341 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:init(975)) - mapreduce.task.io.sort.mb: 100
15:34:03,341 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:init(976)) - soft limit at
15:34:03,341 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:init(977)) - bufstart = 0; bufvoid =
15:34:03,341 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:init(978)) - kvstart = ; length = 6553600
15:34:03,708 INFO  [main] mapreduce.Job (Job.java:monitorAndPrintJob(1355)) - Job job_local_0001 running in uber mode : false
15:34:03,710 INFO  [main] mapreduce.Job (Job.java:monitorAndPrintJob(1362)) - map 0% reduce 0%
15:34:04,121 INFO  [LocalJobRunner Map Task Executor #0] mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(591)) -
15:34:04,128 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:flush(1435)) - Starting flush of map output
15:34:04,128 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:flush(1453)) - Spilling map output
15:34:04,128 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:flush(1454)) - bufstart = 0; bufend = 2055; bufvoid =
15:34:04,128 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:flush(1456)) - kvstart = 857584); kvend = 854736); length = 713/6553600
15:34:04,179 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:sortAndSpill(1639)) - Finished spill 0
15:34:04,194 INFO  [LocalJobRunner Map Task Executor #0] mapred.Task (Task.java:done(995)) - Task:attempt_local_0001_m_ is done. And is in the process of committing
15:34:04,207 INFO  [LocalJobRunner Map Task Executor #0] mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(591)) - map
15:34:04,208 INFO  [LocalJobRunner Map Task Executor #0] mapred.Task (Task.java:sendDone(1115)) - Task 'attempt_local_0001_m_' done.
15:34:04,208 INFO  [LocalJobRunner Map Task Executor #0] mapred.LocalJobRunner (LocalJobRunner.java:run(249)) - Finishing task: attempt_local_0001_m_
15:34:04,208 INFO  [Thread-4] mapred.LocalJobRunner (LocalJobRunner.java:runTasks(456)) - map task executor complete.
15:34:04,211 INFO  [Thread-4] mapred.LocalJobRunner (LocalJobRunner.java:runTasks(448)) - Waiting for reduce tasks
15:34:04,211 INFO  [pool-6-thread-1] mapred.LocalJobRunner (LocalJobRunner.java:run(302)) - Starting task: attempt_local_0001_r_
15:34:04,221 INFO  [pool-6-thread-1] util.ProcfsBasedProcessTree (ProcfsBasedProcessTree.java:isAvailable(129)) - ProcfsBasedProcessTree currently is supported only on Linux.
15:34:04,478 INFO  [pool-6-thread-1] mapred.Task (Task.java:initialize(581)) - Using ResourceCalculatorProcessTree : org.apache.hadoop.yarn.util.WindowsBasedProcessTree@
15:34:04,483 INFO  [pool-6-thread-1] mapred.ReduceTask (ReduceTask.java:run(362)) - Using ShuffleConsumerPlugin: org.apache.hadoop.mapreduce.task.reduce.Shuffle@e2b02a3
15:34:04,500 INFO  [pool-6-thread-1] reduce.MergeManagerImpl (MergeManagerImpl.java:<init>(193)) - MergerManager: memoryLimit=, maxSingleShuffleLimit=, mergeThreshold=, ioSortFactor=10, memToMemMergeOutputsThreshold=10
15:34:04,503 INFO  [EventFetcher for fetching Map Completion Events] reduce.EventFetcher (EventFetcher.java:run(61)) - attempt_local_0001_r_ Thread started: EventFetcher for fetching Map Completion Events
15:34:04,543 INFO  [localfetcher#1] reduce.LocalFetcher (LocalFetcher.java:copyMapOutput(140)) - localfetcher#1 about to shuffle output of map attempt_local_0001_m_ decomp: 1832 len: 1836 to MEMORY
15:34:04,548 INFO  [localfetcher#1] reduce.InMemoryMapOutput (InMemoryMapOutput.java:shuffle(100)) - Read 1832 bytes from map-output for attempt_local_0001_m_
15:34:04,553 INFO  [localfetcher#1] reduce.MergeManagerImpl (MergeManagerImpl.java:closeInMemoryFile(307)) - closeInMemoryFile -> map-output of size: 1832, inMemoryMapOutputs.size() -> 1, commitMemory -> 0, usedMemory -> 1832
15:34:04,564 INFO  [EventFetcher for fetching Map Completion Events] reduce.EventFetcher (EventFetcher.java:run(76)) - EventFetcher is interrupted.. Returning
15:34:04,566 INFO  [pool-6-thread-1] mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(591)) - 1 / 1 copied.
15:34:04,566 INFO  [pool-6-thread-1] reduce.MergeManagerImpl (MergeManagerImpl.java:finalMerge(667)) - finalMerge called with 1 in-memory map-outputs and 0 on-disk map-outputs
15:34:04,585 INFO  [pool-6-thread-1] mapred.Merger (Merger.java:merge(589)) - Merging 1 sorted segments
15:34:04,585 INFO  [pool-6-thread-1] mapred.Merger (Merger.java:merge(688)) - Down to the last merge-pass, with 1 segments left of total size: 1823 bytes
15:34:04,605 INFO  [pool-6-thread-1] reduce.MergeManagerImpl (MergeManagerImpl.java:finalMerge(742)) - Merged 1 segments, 1832 bytes to disk to satisfy reduce memory limit
15:34:04,605 INFO  [pool-6-thread-1] reduce.MergeManagerImpl (MergeManagerImpl.java:finalMerge(772)) - Merging 1 files, 1836 bytes from disk
15:34:04,606 INFO  [pool-6-thread-1] reduce.MergeManagerImpl (MergeManagerImpl.java:finalMerge(787)) - Merging 0 segments, 0 bytes from memory into reduce
15:34:04,607 INFO  [pool-6-thread-1] mapred.Merger (Merger.java:merge(589)) - Merging 1 sorted segments
15:34:04,608 INFO  [pool-6-thread-1] mapred.Merger (Merger.java:merge(688)) - Down to the last merge-pass, with 1 segments left of total size: 1823 bytes
15:34:04,608 INFO  [pool-6-thread-1] mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(591)) - 1 / 1 copied.
15:34:04,643 INFO  [pool-6-thread-1] Configuration.deprecation (Configuration.java:warnOnceIfDeprecated(996)) - mapred.skip.on is deprecated. Instead, use mapreduce.job.skiprecords
15:34:04,714 INFO  [main] mapreduce.Job (Job.java:monitorAndPrintJob(1362)) - map 100% reduce 0%
15:34:04,842 INFO  [pool-6-thread-1] mapred.Task (Task.java:done(995)) - Task:attempt_local_0001_r_ is done. And is in the process of committing
15:34:04,850 INFO  [pool-6-thread-1] mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(591)) - 1 / 1 copied.
15:34:04,850 INFO  [pool-6-thread-1] mapred.Task (Task.java:commit(1156)) - Task attempt_local_0001_r_ is allowed to commit now
15:34:04,881 INFO  [pool-6-thread-1] output.FileOutputCommitter (FileOutputCommitter.java:commitTask(439)) - Saved output of task 'attempt_local_0001_r_' to hdfs://192.168.52.128:9000/data/output/_temporary/0/task_local_0001_r_000000
15:34:04,884 INFO  [pool-6-thread-1] mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(591)) - reduce > reduce
15:34:04,884 INFO  [pool-6-thread-1] mapred.Task (Task.java:sendDone(1115)) - Task 'attempt_local_0001_r_' done.
15:34:04,885 INFO  [pool-6-thread-1] mapred.LocalJobRunner (LocalJobRunner.java:run(325)) - Finishing task: attempt_local_0001_r_
15:34:04,885 INFO  [Thread-4] mapred.LocalJobRunner (LocalJobRunner.java:runTasks(456)) - reduce task executor complete.
15:34:05,714 INFO  [main] mapreduce.Job (Job.java:monitorAndPrintJob(1362)) - map 100% reduce 100%
15:34:05,714 INFO  [main] mapreduce.Job (Job.java:monitorAndPrintJob(1373)) - Job job_local_0001 completed successfully
15:34:05,733 INFO  [main] mapreduce.Job (Job.java:monitorAndPrintJob(1380)) - Counters: 38
    File System Counters
        FILE: Number of bytes read=34542
        FILE: Number of bytes written=470650
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
        HDFS: Number of bytes read=2732
        HDFS: Number of bytes written=1306
        HDFS: Number of read operations=15
        HDFS: Number of large read operations=0
        HDFS: Number of write operations=4
    Map-Reduce Framework
        Map input records=31
        Map output records=179
        Map output bytes=2055
        Map output materialized bytes=1836
        Input split bytes=113
        Combine input records=179
        Combine output records=131
        Reduce input groups=131
        Reduce shuffle bytes=1836
        Reduce input records=131
        Reduce output records=131
        Spilled Records=262
        Shuffled Maps =1
        Failed Shuffles=0
        Merged Map outputs=1
        GC time elapsed (ms)=13
        CPU time spent (ms)=0
        Physical memory (bytes) snapshot=0
        Virtual memory (bytes) snapshot=0
        Total committed heap usage (bytes)=
    Shuffle Errors
        BAD_ID=0
        CONNECTION=0
        IO_ERROR=0
        WRONG_LENGTH=0
        WRONG_MAP=0
        WRONG_REDUCE=0
    File Input Format Counters
        Bytes Read=1366
    File Output Format Counters
        Bytes Written=1306