What jar packages are needed for a Spark job (run as a jar) to write into Impala?

The Big Data Technology Showdown: Spark vs. Impala vs. Hive vs. Presto
With the big data wave in full swing, we keep running into the same kind of question: which tool should we pick to solve a given problem? The same challenge exists in the big data SQL engine space. AtScale, a maker of big data reporting tools, ran a benchmark and came back with the following answers:
1. Spark 2.0 improved its large-query performance by an average of 2.4X over Spark 1.6 (so upgrade!). Small-query performance was already good and remained roughly the same.
2. Impala 2.6 is 2.8X as fast for large queries as version 2.3. Small-query performance was already good and remained roughly the same.
3. Hive 2.1 with LLAP is over 3.4X faster than 1.2 for large queries, and its small-query performance doubled. If you're using Hive, this isn't an upgrade you can afford to skip.
Spark SQL Operations Explained in Detail (codexiu.cn)
I. Spark SQL and SchemaRDD
We will not dwell on Spark SQL's history here and will focus on how to use it. First, though, what exactly is a SchemaRDD? From Spark's Scala API we can see that org.apache.spark.sql.SchemaRDD is declared as class SchemaRDD extends RDD[Row] with SchemaRDDLike, i.e., SchemaRDD inherits from the abstract class RDD. The official documentation defines it as "An RDD of Row objects that has an associated schema. In addition to standard RDD functions, SchemaRDDs can be used in relational queries": an RDD made up of Row objects, plus a schema that describes the data type of each column. In my view, a SchemaRDD is simply a special kind of RDD that Spark SQL provides for SQL querying, which is why ordinary RDDs have to be converted into SchemaRDDs before they can be queried. Put more plainly, you can think of a SchemaRDD as a table in a traditional relational database.
As the architecture diagram in the original post shows, Spark SQL can work with data in Hive, JSON, Parquet (a columnar storage format), and other formats; in other words, SchemaRDDs can be created from all of these sources. Spark SQL itself can be driven through JDBC/ODBC, a Spark application, or the Spark shell, and once the data has been read out of Spark SQL it can be handed to data mining or data visualization tools such as Tableau. A small sketch of the SchemaRDD/DataFrame duality follows.
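To make the duality concrete, here is a minimal sketch (assuming Spark 1.3 and a hypothetical people DataFrame/SchemaRDD with columns name and age): the same object behaves both as an RDD of Row objects and as a relational table.

// people is a DataFrame (SchemaRDD) with columns name: String and age: Int.
// 1) Used as an RDD[Row]: ordinary RDD transformations still apply.
val names = people.map(row => row.getString(0)).collect()
// 2) Used as a table: register it and query it with SQL.
people.registerTempTable("people")
val adults = sqlContext.sql("select name from people where age >= 18").collect()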
II. Using Spark SQL on text files
First, a note: from Spark 1.3 onward, SchemaRDD has been renamed DataFrame. Anyone who has used the Pandas library in Python will find DataFrames familiar; intuitively, a DataFrame is simply a table. People still often call a DataFrame a SchemaRDD, but the API change means the corresponding Spark SQL operations change as well. The experiments below use Spark 1.3.0.
In Spark 1.1.0 there are two ways to turn an RDD into a SchemaRDD, as follows:
The case class approach: use a case class and let reflection infer the schema;
The applySchema approach: define the schema through the programming interface and apply it to an RDD.
Note: the former suits source data whose schema (columns) is already known, while the latter suits RDDs whose schema (columns) is not known in advance.
Create the directory sparksql in HDFS with hdfs dfs -mkdir /sparksql, then upload the data file with hdfs dfs -put /root/Downloads/SparkData/people.txt /sparksql. (The directory name must not contain a space; a name such as "Spark SQL" fails with put: unexpected URISyntaxException.)
Note: people.txt can be inspected at http://localhost:50070/explorer.html#/sparksql. Make a habit of using the Hadoop and Spark web UIs to monitor and tune the cluster.
1. The case class approach
(1) Create an SQLContext
Create an SQLContext from the SparkContext (sc), as follows:
val sqlContext = new org.apache.spark.sql.SQLContext(sc)
import sqlContext.implicits._
The first line: sc is an org.apache.spark.SparkContext; when spark-shell starts, the built-in object sc has already been created, much like the built-in objects in a Java web container.
The second line: imports the implicit conversions that turn an RDD into a DataFrame (i.e., a SchemaRDD).
(2) Define a case class
We define the case class as follows:
case class Person(name: String, age: Int)
Explanation: reflection reads the case class's parameter names and uses them as the column names. A case class can be nested or contain complex types such as Sequences and Arrays, as the sketch below shows.
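A minimal, hypothetical illustration of a nested case class with complex fields (the Contact and Address names are made up here; toDF relies on the implicits imported in step (1)):

case class Address(city: String, street: String)
case class Contact(name: String, age: Int, hobbies: Seq[String], address: Address)

// Reflection maps the fields to columns: hobbies becomes an array column and
// address becomes a nested struct column.
val contacts = sc.parallelize(Seq(
  Contact("Mechel", 29, Seq("reading", "hiking"), Address("Beijing", "Main St"))
)).toDF()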
(3) Create the DataFrame
Create the DataFrame as follows:
val rddPerson = sc.textFile("/sparksql/people.txt").map(_.split(",")).map(p => Person(p(0), p(1).trim.toInt)).toDF()
Through these RDD transformations, the case class is implicitly converted into a DataFrame (here, rddPerson).
The file people.txt contains Mechel, 29; Andy, 30; Jusdin, 19. (It is written on one line here for layout; in the file, each <name, age> pair sits on its own line, as shown below.)
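Reconstructed from the values above, people.txt looks like this:

Mechel, 29
Andy, 30
Jusdin, 19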
(4) Register it as a table
rddPerson.registerTempTable("rddTable")
Explanation: we register rddPerson in sqlContext as the table rddTable. Once it is registered as a table we can run table operations against it, such as select, insert, and join; a join sketch follows.
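For instance, once a second temp table has been registered, the two tables can be joined with plain SQL. This is only a hypothetical sketch; the Score case class and the scoreTable name are made up for illustration:

case class Score(name: String, score: Int)
val rddScore = sc.parallelize(Seq(Score("Mechel", 90), Score("Andy", 75))).toDF()
rddScore.registerTempTable("scoreTable")

// Join the two registered tables with ordinary SQL.
sqlContext.sql("select p.name, p.age, s.score from rddTable p join scoreTable s on p.name = s.name").collect().foreach(println)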
(5) Query it
sqlContext.sql("select name from rddTable where age >= 13 and age <= 19").map(t => "name: " + t(0)).collect().foreach(println)
Explanation: find the names of the people aged between 13 and 19.
The complete code for the case class approach is as follows:
import org.apache.spark.{SparkContext, SparkConf}
import org.apache.spark.sql.SQLContext

/** Created by root on 10/24/15. */
object CaseClass {
  case class Person(name: String, age: Int)

  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("CaseClass")
    val sc = new SparkContext(conf)
    val sqlContext = new SQLContext(sc)
    import sqlContext.implicits._

    // Build the DataFrame from the text file via the Person case class.
    val rddpeople = sc.textFile("hdfs://ubuntu:9000/sparksql/people.txt").map(_.split(",")).map(p => Person(p(0), p(1).trim.toInt)).toDF()
    rddpeople.registerTempTable("rddTable")
    sqlContext.sql("select name from rddTable where age >= 13 and age <= 19").map(t => "Name: " + t(0)).collect().foreach(println)
    // Also save the result as a Parquet file for the Parquet section below.
    rddpeople.saveAsParquetFile("/sparksql/people.parquet")

    sc.stop()
  }
}
The output is as follows (screenshot omitted):
Summary: the basic Spark SQL workflow, then, is to create a sqlContext and define a case class, convert the case class implicitly into a DataFrame through RDD transformations, and finally register the DataFrame as a table in sqlContext, after which the table can be queried.
2. The applySchema approach
The code for the applySchema approach is as follows:
/** Created by root on 10/24/15. */
import org.apache.spark.{SparkContext, SparkConf}
import org.apache.spark.sql.{SQLContext, Row}
import org.apache.spark.sql.types.{StructType, StructField, StringType}

object ApplySchema {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("ApplySchema")
    val sc = new SparkContext(conf)
    val sqlContext = new SQLContext(sc)

    // Create a schema that matches the structure of the data.
    val schemaString = "name age"
    val schema = StructType(schemaString.split(" ").map(fieldName => StructField(fieldName, StringType, true)))

    // Create the rowRDD.
    val rowRDD = sc.textFile("hdfs://ubuntu:9000/sparksql/people.txt").map(_.split(",")).map(p => Row(p(0), p(1).trim))

    // Apply the schema to the rowRDD with applySchema.
    val rddpeople2 = sqlContext.applySchema(rowRDD, schema)
    rddpeople2.registerTempTable("rddTable2")
    sqlContext.sql("select name from rddTable2 where age >= 13 and age <= 19").map(t => "Name: " + t(0)).collect().foreach(println)

    sc.stop()
  }
}
The output is as follows (screenshot omitted):
Summary: the applySchema approach has three steps: create a schema that matches the rowRDD, build the rowRDD from the source RDD, and apply the schema to the rowRDD with applySchema. It is more verbose than the case class approach, but the programming model is clear. (On Spark 1.3 and later there is a direct replacement, shown below.)
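Note that applySchema was deprecated in Spark 1.3 in favor of createDataFrame; on the 1.3.0 version used here the same step can be written as follows (a minimal sketch reusing the rowRDD and schema defined above):

// Equivalent to sqlContext.applySchema(rowRDD, schema) on Spark 1.3+.
val rddpeople2 = sqlContext.createDataFrame(rowRDD, schema)
rddpeople2.registerTempTable("rddTable2")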
III. Using Spark SQL on Parquet files
What is Parquet? Parquet is a column-oriented file format; Cloudera's big data OLAP engine Impala uses it as its columnar storage format. Because a Parquet file preserves the schema, sqlContext can read it straight into a SchemaRDD with no case class needed for the implicit conversion, and Parquet files and SchemaRDDs can be converted into each other. For example, a SchemaRDD can be written out as a Parquet file with rddpeople.saveAsParquetFile("/sparksql/people.parquet").
Note: when the path "/sparksql/people.parquet" is changed to "hdfs://ubuntu:9000/sparksql/people.parquet", i.e., when people.parquet is written to HDFS rather than the local Linux filesystem, the job fails with [Exception in thread "main" java.lang.IllegalArgumentException: Wrong FS: hdfs://ubuntu:9000/sparksql/people.parquet, expected: file:///]. I have not resolved this error; saveAsParquetFile() should be able to write directly to HDFS.
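A hedged guess at the cause: "Wrong FS ... expected: file:///" usually means the Hadoop configuration (core-site.xml with fs.defaultFS) is not visible to the job, so the default filesystem is the local one. One possible workaround, not verified against this exact setup and assuming the NameNode address used in the earlier examples, is to point the Hadoop configuration at HDFS before saving:

// Assumes the HDFS NameNode from the earlier examples; adjust to your cluster.
sc.hadoopConfiguration.set("fs.defaultFS", "hdfs://ubuntu:9000")
rddpeople.saveAsParquetFile("hdfs://ubuntu:9000/sparksql/people.parquet")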
The Spark SQL code for working with a Parquet file is as follows:
import org.apache.spark.{SparkContext, SparkConf}
import org.apache.spark.sql.SQLContext

/** Created by root on 10/31/15. */
object Parquet {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("Parquet")
    val sc = new SparkContext(conf)
    val sqlContext = new SQLContext(sc)

    // Read the Parquet file; the schema comes from the file itself.
    val parquetpeople = sqlContext.parquetFile("/sparksql/people.parquet")
    parquetpeople.registerTempTable("parquetTable")
    sqlContext.sql("select name from parquetTable where age >= 25").map(t => "Name: " + t(0)).collect().foreach(println)

    sc.stop()
  }
}
The output is as follows (screenshot omitted):
Explanation: the code reads people.parquet, registers it as the table parquetTable, and queries the names of the people aged 25 and over.
IV. Using Spark SQL on JSON files
XML plays an important role in big data systems as a configuration format; another very important format is JSON, used for example by the well-known NoSQL database MongoDB. sqlContext can obtain schema information from jsonFile or jsonRDD to build a SchemaRDD, which can then be registered as a table and queried.
The Spark SQL code for working with a JSON file is as follows:
import org.apache.spark.{SparkContext, SparkConf}
import org.apache.spark.sql.SQLContext

/** Created by root on 10/31/15. */
object Json {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("JSON")
    val sc = new SparkContext(conf)
    val sqlContext = new SQLContext(sc)

    // Infer the schema from the JSON file and build the DataFrame.
    val jsonpeople = sqlContext.jsonFile("/root/Downloads/SparkData/people.json")
    jsonpeople.registerTempTable("jsonTable")
    sqlContext.sql("select name from jsonTable where age >= 25").map(t => "Name: " + t(0)).collect().foreach(println)

    sc.stop()
  }
}
The output is as follows (screenshot omitted):
Explanation: the code reads people.json, registers it as the table jsonTable, and queries the names of the people aged 25 and over. A jsonRDD sketch follows.
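The section above also mentions jsonRDD, which builds a SchemaRDD from an RDD of JSON strings instead of a file. A minimal sketch (the sample records are made up here):

// Build a DataFrame from an in-memory RDD of JSON strings.
val jsonStrings = sc.parallelize(Seq(
  """{"name":"Mechel","age":29}""",
  """{"name":"Andy","age":30}"""
))
val jsonRddPeople = sqlContext.jsonRDD(jsonStrings)
jsonRddPeople.registerTempTable("jsonRddTable")
sqlContext.sql("select name from jsonRddTable where age >= 25").collect().foreach(println)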
V. Spark SQL over JDBC
The ThriftServer exposes a JDBC/ODBC interface, so users can reach the data in Spark SQL by connecting to the ThriftServer over JDBC/ODBC. Spark ships sbin/start-thriftserver.sh for this, Hive offers hive --service hiveserver and hive --service hiveserver2, and HBase has hbase-daemon start thrift. How the Thrift services in Spark, Hive, and HBase differ is something I have not yet worked out.
Note: the JDBC driver class for hiveserver is org.apache.hadoop.hive.jdbc.HiveDriver, while the driver class for hiveserver2 is org.apache.hive.jdbc.HiveDriver.
1. beeline
First run hive --service hiveserver2, then run beeline followed by !connect jdbc:hive2://Master:10000. After entering the username and password of the Hive metastore database (e.g., MySQL), beeline can be used to work with Hive. Note that beeline clients can share data with one another.
2. JDBC/ODBC
The ThriftServer matters a great deal to developers, because it lets us access Spark SQL over JDBC/ODBC. For example:
import java.sql.DriverManager

object SQLJDBC {
  def main(args: Array[String]): Unit = {
    // Load the hiveserver2 JDBC driver and connect to the ThriftServer.
    Class.forName("org.apache.hive.jdbc.HiveDriver")
    val conn = DriverManager.getConnection("jdbc:hive2://Master:10000/saledata", "hive", "mysql")
    try {
      val statement = conn.createStatement
      val rs = statement.executeQuery("select ordernumber, amount from tblStockDetail where amount > 3000")
      while (rs.next) {
        val ordernumber = rs.getString("ordernumber")
        val amount = rs.getString("amount")
        println("ordernumber = %s, amount = %s".format(ordernumber, amount))
      }
    } catch {
      case e: Exception => e.printStackTrace()
    } finally {
      conn.close()
    }
  }
}
The output is as follows:
ordernumber = GHSL, amount = 5025
ordernumber = GHSL, amount = 10989
ordernumber = HMJSL, amount = 4490
ordernumber = HMJSL, amount = 10776
ordernumber = HMJSL, amount = 5988
ordernumber = HMJSL, amount = 6578
ordernumber = HMJSL, amount = 3440
ordernumber = HMJSL, amount = 4011
ordernumber = HMJSL, amount = 4011
ordernumber = HMJSL, amount = 29620
ordernumber = HMJSL, amount = 12258
ordernumber = HMJSL, amount = 4018
ordernumber = HMJSL, amount = 3516
ordernumber = HMJSL, amount = 3013
ordernumber = HMJSL, amount = 3013
ordernumber = HMJSL, amount = 3517
ordernumber = HMJSL, amount = 3013
ordernumber = HMJSL, amount = 3013
ordernumber = HMJSL, amount = 3013
ordernumber = HMJSL, amount = 3013
ordernumber = HMJSL, amount = 3013
ordernumber = HMJSL, amount = 3153
ordernumber = HMJSL, amount = 3353
ordernumber = HMJSL, amount = 3517
ordernumber = HMJSL, amount = 3353
ordernumber = HMJSL, amount = 3013
ordernumber = HMJSL, amount = 3013
ordernumber = HMJSL, amount = 3013
ordernumber = HMJSL, amount = 3013
ordernumber = HMJSL, amount = 3013
ordernumber = HMJSL, amount = 3017
ordernumber = HMJSL, amount = 3017
ordernumber = HMJSL, amount = 4191
ordernumber = HMJSL, amount = 3013
ordernumber = HMJSL, amount = 3353
ordernumber = HMJSL, amount = 3353
ordernumber = HMJSL, amount = 4191
ordernumber = HMJSL, amount = 3353
ordernumber = HMJSL, amount = 3353
ordernumber = HMJSL, amount = 3017
ordernumber = HMJSL, amount = 3183
ordernumber = HMJSL, amount = 3183
ordernumber = HMJSL, amount = 4191
ordernumber = HMJSL, amount = 3353
ordernumber = HMJSL, amount = 3013
ordernumber = HMJSL, amount = 3353
ordernumber = HMJSL, amount = 3353
ordernumber = HMJSL, amount = 3353
ordernumber = HMJSL, amount = 3353
ordernumber = HMJSL, amount = 3353
ordernumber = HMJSL, amount = 3353
ordernumber = HMJSL, amount = 4057
ordernumber = HMJSL, amount = 3353
ordernumber = HMJSL, amount = 3353
ordernumber = HMJSL, amount = 3353
ordernumber = HMJSL, amount = 3353
ordernumber = HMJSL, amount = 3353
ordernumber = HMJSL, amount = 3346
ordernumber = HMJSL, amount = 3588
ordernumber = HMJSL, amount = 3664
ordernumber = HMJSL, amount = 4464
ordernumber = HMJSL, amount = 3348
ordernumber = HMJSL, amount = 3504
ordernumber = HMJSL, amount = 3504
ordernumber = HMJSL, amount = 3664
ordernumber = HMJSL, amount = 3588
ordernumber = HMJSL, amount = 3588
ordernumber = HMJSL, amount = 3486
ordernumber = HMJSL, amount = 3766
ordernumber = HMJSL, amount = 3766
ordernumber = HMJSL, amount = 3190
ordernumber = HMJSL, amount = 3486
ordernumber = HMJSL, amount = 3386
ordernumber = HMJSL, amount = 3386
ordernumber = HMJSL, amount = 3386
ordernumber = HMJSL, amount = 3386
ordernumber = HMJSL, amount = 5733
ordernumber = HMJSL, amount = 5478
ordernumber = HMJSL, amount = 4186
ordernumber = HMJSL, amount = 3588
ordernumber = HMJSL, amount = 3833
ordernumber = HMJSL, amount = 3914
ordernumber = HMJSL, amount = 31200
ordernumber = HMJSL, amount = 18600
ordernumber = HMJSL, amount = 3196
ordernumber = HMJSL, amount = 3835
ordernumber = HMJSL, amount = 3160
ordernumber = HMJSL, amount = 15600
ordernumber = HMJSL, amount = 36852
ordernumber = HMJSL, amount = 5980
ordernumber = HMJSL, amount = 11960
ordernumber = HMJSL, amount = 3766
ordernumber = SSSL, amount = 4305
ordernumber = YZSL, amount = 3099
ordernumber = YZSL, amount = 4525
ordernumber = YZSL, amount = 6980
VI. hiveContext in detail
We first define a database saledata and three tables, tblDate, tblStock, and tblStockDetail, in Hive, as follows:
CREATE DATABASE SALEDATA;
use SALEDATA;

-- Date.txt defines the date dimension: each day is tagged with its month, week, quarter, etc.
-- date, year-month, year, month, day, day of week, week number, quarter, ten-day period, half-month
CREATE TABLE tblDate(
dateID string,
theyearmonth string,
theyear string,
themonth string,
thedate string,
theweek string,
theweeks string,
thequot string,
thetenday string,
thehalfmonth string
) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n';

-- Stock.txt defines the order header
-- order number, location, order date
CREATE TABLE tblStock(
ordernumber string,
locationid string,
dateID string
) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n';

-- StockDetail.txt defines the order detail lines
-- order number, row number, item, quantity, amount
CREATE TABLE tblStockDetail(
ordernumber STRING,
rownum int,
itemid string,
price int,
amount int
) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n';

-- Load the data
LOAD DATA LOCAL INPATH '/home/ssw/sparksql/Date.txt' INTO TABLE tblDate;
LOAD DATA LOCAL INPATH '/home/ssw/sparksql/Stock.txt' INTO TABLE tblStock;
LOAD DATA LOCAL INPATH '/home/ssw/sparksql/StockDetail.txt' INTO TABLE tblStockDetail;
Note: Date.txt, Stock.txt, and StockDetail.txt are the sample data files (the download link given in the original post is not reproduced here).
Because Hive stores its data in HDFS, the Hive tables can be browsed at http://master:50070/, as shown below:
Curiously, Size, Replication, and Block Size all show 0 even though the tables are not empty. This is most likely because the listing shows the table directories themselves: the HDFS browser reports 0 for directories, and the actual sizes belong to the data files inside them.
To use hiveContext we first have to construct it. Start spark-shell --master yarn-client and run:
val hiveContext = new org.apache.spark.sql.hive.HiveContext(sc)
Note that spark-shell announces "Spark context available as sc" and "SQL context available as sqlContext"; if you want a hiveContext, you have to run the line above yourself.
With hiveContext in hand we can work with Hive. First switch to the saledata database, then list all of its tables:
hiveContext.sql("use saledata")
hiveContext.sql("show tables").collect().foreach(println)
The output is as follows (screenshot omitted):
Three more SQL operations follow:
(1) For all orders, compute the number of orders and the total sales amount per year
scala> hiveContext.sql("select c.theyear, count(distinct a.ordernumber), sum(b.amount) from tblStock a join tblStockDetail b on a.ordernumber=b.ordernumber join tbldate c on a.dateid=c.dateid group by c.theyear order by c.theyear").collect().foreach(println)
The output is as follows (screenshot omitted):
Explanation: the HQL above joins the three tables (tblStock, tblStockDetail, and tbldate) and then groups and orders by theyear. How the join gets translated into MapReduce operations is something I still need to understand by studying how Hive works.
(2) For all orders, find the sales amount of the largest order in each year
(3) For all orders, find the best-selling item in each year (hedged sketches of both queries follow)
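The original post shows queries (2) and (3) only as screenshots. The sketches below are plausible reconstructions rather than the author's exact HQL; the column names follow the DDL above and the per-year aggregation pattern of query (1):

// (2) For each year, the amount of the single largest order.
hiveContext.sql(
  "select c.theyear, max(d.sumofamount) from tbldate c join " +
  "(select a.dateid, a.ordernumber, sum(b.amount) as sumofamount " +
  " from tblStock a join tblStockDetail b on a.ordernumber = b.ordernumber " +
  " group by a.dateid, a.ordernumber) d " +
  "on c.dateid = d.dateid group by c.theyear order by c.theyear").collect().foreach(println)

// (3) For each year, the item with the largest total sales amount.
hiveContext.sql(
  "select distinct e.theyear, e.itemid, f.maxofamount from " +
  "(select c.theyear, b.itemid, sum(b.amount) as sumofamount " +
  " from tblStock a join tblStockDetail b on a.ordernumber = b.ordernumber " +
  " join tbldate c on a.dateid = c.dateid group by c.theyear, b.itemid) e " +
  "join " +
  "(select theyear, max(sumofamount) as maxofamount from " +
  " (select c.theyear, b.itemid, sum(b.amount) as sumofamount " +
  "  from tblStock a join tblStockDetail b on a.ordernumber = b.ordernumber " +
  "  join tbldate c on a.dateid = c.dateid group by c.theyear, b.itemid) g " +
  " group by theyear) f " +
  "on e.theyear = f.theyear and e.sumofamount = f.maxofamount order by e.theyear").collect().foreach(println)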
VII. A combined Spark SQL application
(1) Clustering the stores
We use Spark SQL together with MLlib to cluster the stores. The experiment uses the tblStock data and clusters on two features, total sales quantity and total sales amount, as follows:
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.Row
import org.apache.spark.sql.hive.HiveContext
import org.apache.spark.mllib.clustering.KMeans
import org.apache.spark.mllib.linalg.Vectors

object SQLMLlib {
  def main(args: Array[String]): Unit = {
    // Set up the run environment.
    val sparkConf = new SparkConf().setAppName("SQLMLlib")
    val sc = new SparkContext(sparkConf)
    val hiveContext = new HiveContext(sc)

    // Query the total quantity and total amount of every store.
    hiveContext.sql("use saledata")
    // Set the number of partitions used when Spark SQL shuffles.
    hiveContext.sql("set spark.sql.shuffle.partitions=20")
    // hiveContext.sql returns an org.apache.spark.sql.DataFrame.
    val sqldata = hiveContext.sql("select a.locationid, sum(b.qty) totalqty, sum(b.amount) totalamount from tblStock a " +
      "join tblstockdetail b on a.ordernumber = b.ordernumber group by a.locationid")

    // Turn the query result into feature vectors.
    val parsedData = sqldata.map {
      case Row(_, totalqty, totalamount) =>
        val features = Array[Double](totalqty.toString.toDouble, totalamount.toString.toDouble)
        Vectors.dense(features)
    }

    println("*******************************************this is a test1***********************************************")

    // Train a k-means model with 3 clusters and 20 iterations.
    val numClusters = 3
    val numIterations = 20
    val model = KMeans.train(parsedData, numClusters, numIterations)

    // Predict the cluster of every store and save the results.
    val results2 = sqldata.map {
      case Row(locationid, totalqty, totalamount) =>
        val features = Array[Double](totalqty.toString.toDouble, totalamount.toString.toDouble)
        val linevectore = Vectors.dense(features)
        val prediction = model.predict(linevectore)
        locationid + " " + totalqty + " " + totalamount + " " + prediction
    }.saveAsTextFile(args(0))

    println("*******************************************this is a test2***********************************************")

    sc.stop()
  }
}
The result of the SQL query inside the code above is as follows (screenshot omitted):
After compiling and packaging, submit the JAR with spark-submit:
spark-submit --master yarn-client --class SQLMLlib ./HelloScala.jar /output1
Run hdfs dfs -getmerge /output1 result.txt followed by cat result.txt to see the output:
GUIHE 0 0
DY 355 55195 1
YINZUO 9 2
TAIHUA 2 2
DOGNGUAN 3 0
Spark SQL uses 200 partitions by default when shuffling; the command set spark.sql.shuffle.partitions=20 sets the number of partitions to 20. The output directory /output1 therefore looks like this:
Note: after the parameter is changed, running the same query uses 20 tasks (partitions) instead of 200.
(2)PageRank
VIII. Spark SQL tuning
IX. Integrating Spark with Hive
Integrate Spark SQL with Hive as follows:
(1) Edit spark-env.sh
export HIVE_HOME=$HIVE_HOME
export SPARK_CLASSPATH=$HIVE_HOME/lib/mysql-connector-java-5.1.38-bin.jar:$SPARK_CLASSPATH
(2) Copy hive-site.xml into $SPARK_HOME/conf
cp $HIVE_HOME/conf/hive-site.xml $SPARK_HOME/conf
(3) Restart the Spark cluster
sbin/stop-all.sh
sbin/start-all.sh
The Spark 1.5 Spark SQL and DataFrame guide says: "When working with Hive one must construct a HiveContext, which inherits from SQLContext, and adds support for finding tables in the MetaStore and writing queries using HiveQL. Users who do not have an existing Hive deployment can still create a HiveContext. When not configured by the hive-site.xml, the context automatically creates metastore_db and warehouse in the current directory." In other words, even without integrating Spark SQL with an existing Hive deployment, Spark SQL can still use HiveQL and a HiveContext (it simply creates its own local metastore), so Hive-style tables can be used from spark-sql on its own.
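A minimal sketch of that statement (assuming a spark-shell session without hive-site.xml; the table name is made up): a HiveContext can still be constructed and used, and metastore_db/ and warehouse/ appear in the current directory.

import org.apache.spark.sql.hive.HiveContext

// Works even without hive-site.xml; a local metastore_db/ and warehouse/ are created.
val hiveContext = new HiveContext(sc)
hiveContext.sql("create table if not exists src (key int, value string)")
hiveContext.sql("show tables").collect().foreach(println)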