An Example of JDK 1.6's Built-in Web Service Support
rem auto.bat script: build, publish, generate the client, then run it
javac HelloService.java
rem "start" launches the service in its own window so the script can continue
start java HelloService
wsimport -d . http://localhost:7070/Ebay?wsdl
wsimport -s . -d . http://localhost:7070/Ebay?wsdl
javac -d . Main.java
java localhost._7070.ebay.Main
// HelloService.java
import javax.xml.ws.*;
import javax.jws.*;
import javax.jws.soap.*;

@WebService(targetNamespace="http://localhost:7070/Ebay")
@SOAPBinding(style=SOAPBinding.Style.RPC)
public class HelloService {

    public static void main(String[] args) {
        // Publish the service on the lightweight HTTP server built into JDK 1.6
        Endpoint.publish("http://localhost:7070/Ebay", new HelloService());
    }

    @WebMethod
    public void sayHello() {
        System.out.println("hello");
    }
}
// Main.java
package localhost._7070.ebay;

public class Main {
    public static void main(String[] args) {
        // The wsimport-generated service class hands back a proxy for the port
        HelloServiceService hss = new HelloServiceService();
        HelloService hs = hss.getHelloServicePort();
        hs.sayHello();
    }
}
The three files above are the ones used in the walkthrough below. Before reading on, put all three in the same directory, run auto.bat, and look at the result; the rest of the article will then be much clearer.
First, make sure your JDK is version 1.6 or above.
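Both wsimport and the embedded Endpoint publisher ship with the JDK from 1.6 onward, so a quick sanity check from the command line (not part of the original post) is:

E:/wsclient>java -version
E:/wsclient>wsimport -version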
1. Write a class file named HelloService.java with the following code (here it is placed in E:/wsclient):
HelloService.java:
import javax.xml.ws.*;
import javax.jws.*;
import javax.jws.soap.*;

@WebService(targetNamespace="http://localhost:7070/Ebay")
@SOAPBinding(style=SOAPBinding.Style.RPC)
public class HelloService {

    public static void main(String[] args) {
        Endpoint.publish("http://localhost:7070/Ebay", new HelloService());
    }

    @WebMethod
    public void sayHello() {
        System.out.println("hello");
    }
}
2. Compile and run it from the command line:
E:/wsclient>javac HelloService.java
E:/wsclient>java HelloService
3. Without stopping the program, open a browser and enter http://localhost:7070/Ebay?wsdl in the address bar.
Press Enter; when you see the following content, the service has been published successfully:
<?xml version="1.0" encoding="UTF-8"?>
<definitions xmlns="http://schemas.xmlsoap.org/wsdl/"
             xmlns:tns="http://localhost:7070/Ebay"
             xmlns:xsd="http://www.w3.org/2001/XMLSchema"
             xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/"
             targetNamespace="http://localhost:7070/Ebay"
             name="HelloServiceService">
  <types/>
  <message name="sayHello"/>
  <message name="sayHelloResponse"/>
  <portType name="HelloService">
    <operation name="sayHello" parameterOrder="">
      <input message="tns:sayHello"/>
      <output message="tns:sayHelloResponse"/>
    </operation>
  </portType>
  <binding name="HelloServicePortBinding" type="tns:HelloService">
    <soap:binding style="rpc" transport="http://schemas.xmlsoap.org/soap/http"/>
    <operation name="sayHello">
      <soap:operation soapAction=""/>
      <input>
        <soap:body use="literal" namespace="http://localhost:7070/Ebay"/>
      </input>
      <output>
        <soap:body use="literal" namespace="http://localhost:7070/Ebay"/>
      </output>
    </operation>
  </binding>
  <service name="HelloServiceService">
    <port name="HelloServicePort" binding="tns:HelloServicePortBinding">
      <soap:address location="http://localhost:7070/Ebay"/>
    </port>
  </service>
</definitions>
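The same check can be done from Java instead of a browser; a minimal sketch using only the JDK standard library (the class name WsdlCheck is ours, not from the original post):

// WsdlCheck.java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;

public class WsdlCheck {
    public static void main(String[] args) throws Exception {
        // Fetch the WSDL exactly as the browser does and dump it to stdout
        URL url = new URL("http://localhost:7070/Ebay?wsdl");
        BufferedReader in = new BufferedReader(new InputStreamReader(url.openStream(), "UTF-8"));
        String line;
        while ((line = in.readLine()) != null) {
            System.out.println(line);
        }
        in.close();
    }
}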
4. Leave the service running and open a second command window to generate a client for it:
E:/wsclient>wsimport -d e:/wsclient http://localhost:7070/Ebay?wsdl
Press Enter; this creates a localhost folder under the wsclient directory on drive E.
The directory structure inside is E:/wsclient/localhost/_7070/ebay.
The ebay directory contains two files: HelloService.class and HelloServiceService.class.
Class files alone are not much use for learning, so generate the source code as well with the following command:
E:/wsclient>wsimport -s e:/wsclient -d e:/wsclient http://localhost:7070/Ebay?wsdl
Press Enter; this generates HelloService.java and HelloServiceService.java in the ebay directory.
Open them and you will see that HelloService.java is an interface; to call it from a client, we need to obtain a proxy.
Open HelloServiceService.java and you will find a getHelloServicePort() method that returns exactly such a HelloService proxy.
Now we can write a client that makes the call.
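For orientation, the wsimport-generated service class has roughly this shape (a sketch only; the exact output varies by JDK build):

// Rough shape of the generated HelloServiceService (sketch)
import java.net.URL;
import javax.xml.namespace.QName;
import javax.xml.ws.Service;
import javax.xml.ws.WebEndpoint;
import javax.xml.ws.WebServiceClient;

@WebServiceClient(name = "HelloServiceService", targetNamespace = "http://localhost:7070/Ebay")
public class HelloServiceService extends Service {

    public HelloServiceService(URL wsdlLocation, QName serviceName) {
        super(wsdlLocation, serviceName);
    }

    @WebEndpoint(name = "HelloServicePort")
    public HelloService getHelloServicePort() {
        // Returns a dynamic proxy that speaks SOAP to the published endpoint
        return super.getPort(new QName("http://localhost:7070/Ebay", "HelloServicePort"), HelloService.class);
    }
}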
5. Write a client class, Main.java, that makes the call (the file lives in E:/wsclient; compiling with -d generates the class into the ebay package directory). The code is as follows:
Main.java:
package localhost._7070.ebay;

public class Main {
    public static void main(String[] args) {
        HelloServiceService hss = new HelloServiceService();
        HelloService hs = hss.getHelloServicePort();
        hs.sayHello();
    }
}
Run: E:/wsclient>javac -d . Main.java
This automatically places the class file into the matching package directory.
Then run the program:
E:/wsclient>java localhost._7070.ebay.Main
Press Enter. Nothing appears in this window, because the method body runs on the server side. Switch to the command window where the service was started, and it shows:
E:/wsclient>java HelloService
hello
The hello has been printed on the server side. That's it!
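Incidentally, if you want the client window to show something too, a small variant (a sketch; re-run wsimport after changing the service) is to return the greeting instead of printing it:

// In HelloService.java (variant)
@WebMethod
public String sayHello() {
    return "hello";
}

// In Main.java (variant): the proxy carries the return value back over SOAP
System.out.println(hs.sayHello());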
Displaying HBase Data from a JSP Page
Environment: JDK 1.7, with Hadoop and HBase already installed and running.
1. Create an ordinary dynamic web project that runs off directly imported jars; no Maven or other framework is required.
2. Import the relevant Hadoop and HBase jars into the project.
The core is Output_HBase.java, which reads out the content for a given table name and row key.
Also add a log4j.properties file.
3. Create a servlet class and a JSP page, and make sure the imported jars end up on the web application's classpath (e.g. WEB-INF/lib).
Project directory structure:
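(The screenshot of the project tree did not survive; a plausible layout, with package names inferred from the imports in the code below:)

src/
  control/Output_HBase.java
  control/OutPrx.java
  model/Article.java
  Import_HBase.java
  Test.java
  log4j.properties
WebContent/
  hello.jsp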
Output_HBase.java:
// The original post does not show a package; "control" is chosen to match
// the servlet's import of control.OutPrx, which instantiates this class.
package control;

import java.io.IOException;

import model.Article;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.client.HTablePool;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;

@SuppressWarnings("deprecation")
public class Output_HBase {

    HBaseAdmin admin = null;
    Configuration conf = null;

    /**
     * The constructor loads the configuration.
     */
    public Output_HBase() {
        conf = new Configuration();
        conf.set("hbase.zookeeper.quorum", "192.168.1.200:2181");
        conf.set("hbase.rootdir", "hdfs://192.168.1.200:9000/hbase");
        System.out.println("Initialization finished");
        try {
            admin = new HBaseAdmin(conf);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    public static void main(String[] args) {
        Output_HBase o = new Output_HBase();
        o.get("article", "1");
    }

    /**
     * Read the row with the given key from the given table and map it to an Article.
     */
    public Article get(String tableName, String row) {
        Article article = null;
        try {
            @SuppressWarnings("resource")
            HTablePool hTablePool = new HTablePool(conf, 1000);
            HTableInterface table = hTablePool.getTable(tableName);
            Get get = new Get(row.getBytes());
            Result result = table.get(get);
            // KeyValues come back sorted by column qualifier:
            // author, content, describe, id, title
            KeyValue[] raw = result.raw();
            if (raw.length == 5) {
                article = new Article();
                article.setId(new String(raw[3].getValue()));
                article.setTitle(new String(raw[4].getValue()));
                article.setAuthor(new String(raw[0].getValue()));
                article.setDescribe(new String(raw[2].getValue()));
                article.setContent(new String(raw[1].getValue()));
                System.out.println("ID: " + article.getId() + "\n");
                System.out.println("Title: " + article.getTitle() + "\n");
                System.out.println("Author: " + article.getAuthor() + "\n");
                System.out.println("Description: " + article.getDescribe() + "\n");
                System.out.println("Content: " + article.getContent() + "\n");
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
        return article;
    }

    /**
     * Dump every cell of the given table.
     *
     * @param tableName the table to scan
     */
    public void getALLData(String tableName) {
        try {
            @SuppressWarnings("resource")
            HTable hTable = new HTable(conf, tableName);
            Scan scan = new Scan();
            ResultScanner scanner = hTable.getScanner(scan);
            for (Result result : scanner) {
                if (result.raw().length == 0) {
                    System.out.println(tableName + " is empty!");
                }
                for (KeyValue kv : result.raw()) {
                    System.out.println(new String(kv.getKey()) + "\t" + new String(kv.getValue()));
                }
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
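One caveat on the class above: the raw[i] indices rely on HBase returning KeyValues sorted by column qualifier. A more robust sketch looks the cells up by family and qualifier, assuming the info column family that Import_HBase below writes:

// Sketch: qualifier-based lookup instead of positional raw[] access
byte[] family = "info".getBytes();
article = new Article();
article.setId(new String(result.getValue(family, "id".getBytes())));
article.setTitle(new String(result.getValue(family, "title".getBytes())));
article.setAuthor(new String(result.getValue(family, "author".getBytes())));
article.setDescribe(new String(result.getValue(family, "describe".getBytes())));
article.setContent(new String(result.getValue(family, "content".getBytes())));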
OutPrx.java:
// Package taken from the servlet's "import control.OutPrx" below.
package control;

import model.Article;

public class OutPrx {

    private String id;
    private String title;
    private String author;
    private String describe;
    private String content;

    public OutPrx() {
        // Load the article as soon as the bean is created,
        // so the servlet can read the fields straight away.
        get();
    }

    public void get() {
        Output_HBase out1 = new Output_HBase();
        Article article = out1.get("article", "520");
        this.id = article.getId();
        this.title = article.getTitle();
        this.author = article.getAuthor();
        this.describe = article.getDescribe();
        this.content = article.getContent();
    }

    public String getId() { return id; }
    public void setId(String id) { this.id = id; }
    public String getTitle() { return title; }
    public void setTitle(String title) { this.title = title; }
    public String getAuthor() { return author; }
    public void setAuthor(String author) { this.author = author; }
    public String getDescribe() { return describe; }
    public void setDescribe(String describe) { this.describe = describe; }
    public String getContent() { return content; }
    public void setContent(String content) { this.content = content; }
}
Article.java:
package model;

public class Article {

    private String id;
    private String title;
    private String describe;
    private String content;
    private String author;

    public Article() {
    }

    public Article(String id, String title, String describe, String content, String author) {
        this.id = id;
        this.title = title;
        this.describe = describe;
        this.content = content;
        this.author = author;
    }

    public String getId() { return id; }
    public void setId(String id) { this.id = id; }
    public String getTitle() { return title; }
    public void setTitle(String title) { this.title = title; }
    public String getDescribe() { return describe; }
    public void setDescribe(String describe) { this.describe = describe; }
    public String getContent() { return content; }
    public void setContent(String content) { this.content = content; }
    public String getAuthor() { return author; }
    public void setAuthor(String author) { this.author = author; }

    @Override
    public String toString() {
        return this.id + "\t" + this.title + "\t" + this.author + "\t" + this.describe + "\t" + this.content;
    }
}

(The next class is unrelated to the JSP display and can be skipped; it is the MapReduce job that loaded the data into HBase.)
Import_HBase.java:
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Mutation;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableOutputFormat;
import org.apache.hadoop.hbase.mapreduce.TableReducer;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

public class Import_HBase {

    public static class MyMapper extends Mapper<LongWritable, Text, LongWritable, Text> {
        @Override
        protected void map(LongWritable key, Text value,
                Mapper<LongWritable, Text, LongWritable, Text>.Context context)
                throws IOException, InterruptedException {
            // Emit the file offset as the key and the raw line as the value
            context.write(key, value);
        }
    }

    public static class MyReduce extends TableReducer<LongWritable, Text, NullWritable> {
        private String family = "info";

        @Override
        protected void reduce(LongWritable arg0, Iterable<Text> v2s,
                Reducer<LongWritable, Text, NullWritable, Mutation>.Context context)
                throws IOException, InterruptedException {
            for (Text value : v2s) {
                String line = value.toString();
                String[] splited = line.split("\t");
                // First field doubles as the row key
                String rowkey = splited[0];
                Put put = new Put(rowkey.getBytes());
                put.add(family.getBytes(), "id".getBytes(), splited[0].getBytes());
                put.add(family.getBytes(), "title".getBytes(), splited[1].getBytes());
                put.add(family.getBytes(), "author".getBytes(), splited[2].getBytes());
                put.add(family.getBytes(), "describe".getBytes(), splited[3].getBytes());
                put.add(family.getBytes(), "content".getBytes(), splited[4].getBytes());
                context.write(NullWritable.get(), put);
            }
        }
    }

    private static String tableName = "article";

    @SuppressWarnings("deprecation")
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.rootdir", "hdfs://192.168.1.200:9000/hbase");
        conf.set("hbase.zookeeper.quorum", "192.168.1.200:2181");
        conf.set(TableOutputFormat.OUTPUT_TABLE, tableName);
        Job job = new Job(conf, Import_HBase.class.getSimpleName());
        TableMapReduceUtil.addDependencyJars(job);
        job.setJarByClass(Import_HBase.class);
        job.setMapperClass(MyMapper.class);
        job.setReducerClass(MyReduce.class);
        job.setMapOutputKeyClass(LongWritable.class);
        job.setMapOutputValueClass(Text.class);
        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(TableOutputFormat.class);
        FileInputFormat.setInputPaths(job, "hdfs://192.168.1.200:9000/hbase_solr");
        job.waitForCompletion(true);
    }
}
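For reference, the reduce step assumes every input line under hdfs://192.168.1.200:9000/hbase_solr carries five tab-separated fields in the order id, title, author, describe, content. A hypothetical sample line (made up for illustration; fields separated by single tab characters):

1	Hello HBase	Bob	A first post	This is the article body.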
Test.java (the servlet):
import java.io.IOException;

import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import control.OutPrx;

/**
 * Servlet implementation class Test
 */
@WebServlet("/Test")
public class Test extends HttpServlet {
    private static final long serialVersionUID = 1L;

    /**
     * @see HttpServlet#HttpServlet()
     */
    public Test() {
        super();
    }

    /**
     * @see HttpServlet#doGet(HttpServletRequest request, HttpServletResponse response)
     */
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        OutPrx oo = new OutPrx();
        // Store each field in request scope for hello.jsp
        request.setAttribute("id", oo.getId());
        request.setAttribute("title", oo.getTitle());
        request.setAttribute("author", oo.getAuthor());
        request.setAttribute("describe", oo.getDescribe());
        request.setAttribute("content", oo.getContent());
        request.getRequestDispatcher("/hello.jsp").forward(request, response);
    }

    /**
     * @see HttpServlet#doPost(HttpServletRequest request, HttpServletResponse response)
     */
    protected void doPost(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        doGet(request, response);
    }
}
log4j.properties:
### set log levels - for more verbose logging change 'info' to 'debug' ###
log4j.rootLogger=DEBUG,stdout,file
## Disable other log
#log4j.logger.org.springframework=OFF
#log4j.logger.org.apache.struts2=OFF
#log4j.logger.com.opensymphony.xwork2=OFF
#log4j.logger.com.ibatis=OFF
#log4j.logger.org.hibernate=OFF
### direct log messages to stdout ###
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.Target=System.out
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{ABSOLUTE} %5p %c{1}:%L - %m%n
### direct messages to file mylog.log ###
log4j.appender.file=org.apache.log4j.DailyRollingFileAppender
log4j.appender.file.File=logs/spider_web.log
log4j.appender.file.DatePattern = '.'yyyy-MM-dd
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d{ABSOLUTE} %5p %c{1}:%L - %m%n
### direct messages to file mylog.log ###
log4j.logger.com.superwu.crm.service=INFO, ServerDailyRollingFile
log4j.appender.ServerDailyRollingFile=org.apache.log4j.DailyRollingFileAppender
log4j.appender.ServerDailyRollingFile.File=logs/biapp-service.log
log4j.appender.ServerDailyRollingFile.DatePattern='.'yyyy-MM-dd
log4j.appender.ServerDailyRollingFile.layout=org.apache.log4j.PatternLayout
log4j.appender.ServerDailyRollingFile.layout.ConversionPattern=%d{yyy-MM-dd HH:mm:ss } -[%r]-[%p] %m%n
#log4j.logger.com.superwu.crm.service.DrmService=INFO, ServerDailyRollingFile
#log4j.appender.drm=org.apache.log4j.RollingFileAppender
#log4j.appender.drm.File=logs/crm-drm.log
#log4j.appender.drm.MaxFileSize=10000KB
#log4j.appender.drm.MaxBackupIndex=10
#log4j.appender.drm.Append=true
#log4j.appender.drm.layout=org.apache.log4j.PatternLayout
#log4j.appender.drm.layout.ConversionPattern=[start]%d{yyyy/MM/dd/ HH:mm:ss}[DATE]%n%p[PRIORITY]%n%x[NDC]%n%t[THREAD]%n%c[CATEGORY]%n%m[MESSAGE]%n%n
#log4j.appender.drm.layout.ConversionPattern=[%5p]%d{yyyy-MM-dd HH:mm:ss}[%c](%F:%L)%n%m%n%n
hello.jsp:
<%@ page language="java" contentType="text/html; charset=UTF-8"
    pageEncoding="UTF-8"%>
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<title>Insert title here</title>
</head>
<body>
<% String id = (String) request.getAttribute("id"); %>
<% String title = (String) request.getAttribute("title"); %>
<% String author = (String) request.getAttribute("author"); %>
<% String describe = (String) request.getAttribute("describe"); %>
<% String content = (String) request.getAttribute("content"); %>
<%= "Article ID: " + id %> <br><br>
<%= "Title: " + title %> <br><br>
<%= "Author: " + author %> <br><br>
<%= "Description: " + describe %> <br><br>
<%= "Body: " + content %> <br><br>
</body>
</html>
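As an aside, the scriptlets could equally be written with JSP EL, which reads the same request attributes directly; a minimal sketch of two of the lines:

<%-- Sketch: the same output using EL instead of scriptlets --%>
Article ID: ${id} <br><br>
Title: ${title} <br><br>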
Right-click the servlet class and run it.
(Screenshots in the original showed the Eclipse workspace and project tree, the rendered page in the browser, the data in the HBase table, and the HBase table schema.)
Before running the web program, make sure Hadoop and HBase are up.
If the project has loading errors, for example the @WebServlet("/Test") annotation not being recognized, or "class not found" errors, first recreate the project from scratch and run it again; if that still does not help, analyze the specific case.
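If the annotation route keeps failing (for example on a container older than Servlet 3.0), the same mapping can be declared in web.xml instead; a minimal sketch, assuming the Test class sits in the default package as listed above:

<servlet>
  <servlet-name>Test</servlet-name>
  <servlet-class>Test</servlet-class>
</servlet>
<servlet-mapping>
  <servlet-name>Test</servlet-name>
  <url-pattern>/Test</url-pattern>
</servlet-mapping>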
This is purely self-amusement; of course Maven would be better, and Spring MVC better still.
HBase Fully Distributed Installation and Configuration
1. Environment
This is a fully distributed HBase installation with two nodes:
192.168.1.67 MasterServer (HBase master and, optionally, a regionserver)
192.168.1.241 SlaveServer (HBase regionserver)
2. Prerequisites
Install Hadoop; for details see: http://blog.csdn.net/hwwn2009/article/details/
Install ZooKeeper, since a standalone ZooKeeper cluster will be used; for details see: http://blog.csdn.net/hwwn2009/article/details/
3. Install HBase
1) Download it; take care to pick a version compatible with your Hadoop, preferably a stable release:
wget http://mirrors./apache/hbase/hbase-0.98.5/hbase-0.98.5-hadoop2-bin.tar.gz
2) Unpack it:
tar -zxvf hbase-0.98.5-hadoop2-bin.tar.gz
3) Edit conf/hbase-site.xml:
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://MasterServer:9000/hbase</value>
</property>
<property>
  <name>hbase.cluster.distributed</name>
  <value>true</value>
</property>
<property>
  <name>hbase.master</name>
  <value>hdfs://MasterServer:60000</value>
</property>
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>MasterServer,SlaveServer</value>
</property>
4) Edit conf/regionservers:
MasterServer
SlaveServer
Note: if you do not want MasterServer to double as an HRegionServer, remove it from this file.
5) Edit conf/hbase-env.sh:
export JAVA_HOME=/usr/lib/jvm/jdk1.6/jdk1.6.0_27
# Use the standalone ZooKeeper cluster rather than the one bundled with HBase.
export HBASE_MANAGES_ZK=false
export HBASE_HOME=/home/hadooper/hadoop/hbase-0.98.5
export HADOOP_HOME=/home/hadooper/hadoop/hadoop-2.5.1
4. Copy the configured HBase to the other nodes:
scp -r hbase-0.98.5 hadooper@SlaveServer:~/hadoop/
5. Add the following to each node's Hadoop hdfs-site.xml:
<property>
  <name>dfs.datanode.max.xcievers</name>
  <value>4096</value>
</property>
Note: this parameter caps the number of send and receive tasks a datanode may run concurrently; the default is 256.
6. Start and test. The start order is Hadoop -> ZooKeeper -> HBase; the stop order is HBase -> ZooKeeper -> Hadoop.
1) Start HBase:
bin/start-hbase.sh
2) Check the processes with jps:
① On the master node (MasterServer):
8428 JobHistoryServer
4048 QuorumPeerMain
15482 HMaster
30357 NameNode
15632 HRegionServer
30717 ResourceManager
30563 SecondaryNameNode
② On the slave node (SlaveServer):
9340 QuorumPeerMain
11991 HRegionServer
19375 DataNode
19491 NodeManager
3) Enter the hbase shell:
bin/hbase shell
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 0.98.5-hadoop2, rUnknown, Mon Aug 4 23:58:06 PDT 2014
4) Check the cluster status:
hbase(main):001:0& status
2 servers, 0 dead, 1.5000 average load
5) Create a table to test:
hbase(main):002:0> create 'test','id'
0 row(s) in 1.3530 seconds
=> Hbase::Table - test
hbase(main):003:0> list
2 row(s) in 0.0430 seconds
=> ["member", "test"]
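To exercise reads and writes as well, a quick smoke test (our addition, not from the original post) puts one cell into the new table and reads it back:

hbase(main):004:0> put 'test','row1','id:1','value1'
hbase(main):005:0> get 'test','row1'
hbase(main):006:0> scan 'test'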
6) Check the cluster status in the web UI.
If all of the above works, congratulations: installation and configuration are complete.
7. Problems encountered
1) Q: Running jps on the slave node shows no HRegionServer, and status reports only one server (1 servers); that is, HBase on the slave node did not start.
A: Check the log:
14:29:38,147 WARN  [regionserver60020] zookeeper.RecoverableZooKeeper: Node /hbase/rs/SlaveServer,5376898 already deleted, retry=false
14:29:38,147 WARN  [regionserver60020] regionserver.HRegionServer: Failed deleting my ephemeral node
org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /hbase/rs/SlaveServer,5376898
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:111)
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
    at org.apache.zookeeper.ZooKeeper.delete(ZooKeeper.java:873)
    at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.delete(RecoverableZooKeeper.java:156)
    at org.apache.hadoop.hbase.zookeeper.ZKUtil.deleteNode(ZKUtil.java:1273)
    at org.apache.hadoop.hbase.zookeeper.ZKUtil.deleteNode(ZKUtil.java:1262)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.deleteMyEphemeralNode(HRegionServer.java:1298)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1012)
    at java.lang.Thread.run(Thread.java:662)
14:29:38,158 INFO  [regionserver60020] zookeeper.ZooKeeper: Session: 0xcfd0014 closed
14:29:38,158 INFO  [regionserver60020-EventThread] zookeeper.ClientCnxn: EventThread shut down
14:29:38,158 INFO  [regionserver60020] regionserver.HRegionServer: zookeeper connection closed.
14:29:38,158 INFO  [regionserver60020] regionserver.HRegionServer: regionserver60020 exiting
14:29:38,158 ERROR [main] regionserver.HRegionServerCommandLine: Region server exiting
java.lang.RuntimeException: HRegionServer Aborted
    at org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.start(HRegionServerCommandLine.java:66)
    at org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.run(HRegionServerCommandLine.java:85)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
    at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.main(HRegionServer.java:2422)
14:29:38,160 INFO  [Thread-9] regionserver.ShutdownHook: Shutdown hook starting; hbase.shutdown.hook=true; fsShutdownHook=org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer@8d5aad
14:29:38,160 INFO  [Thread-9] regionserver.ShutdownHook: Starting fs shutdown hook thread.
14:29:38,160 INFO  [Thread-9] regionserver.ShutdownHook: Shutdown hook finished.
The cause: the cluster clocks were not synchronized, so the slave node failed to start.
Running ntp on every node fixes it:
ntpdate asia.pool.ntp.org
The change can also be made permanent; see: /article/48206aeae2eb334.
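One common way to make the fix permanent (a sketch, not from the original post) is a cron entry that resyncs periodically:

# Sketch: resync the clock every hour; the path to ntpdate may differ per distro
0 * * * * /usr/sbin/ntpdate asia.pool.ntp.org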
2) Q: After entering the hbase shell, it warns:
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hbase/lib/slf4j-log4j12-1.6.4.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
A: There is a jar conflict; deleting the copy shipped with HBase resolves it:
rm lib/slf4j-log4j12-1.6.4.jar
Please credit the source when reposting: http://blog.csdn.net/hwwn2009/article/details/