Ways to Create an RDD

I: From an external storage system, such as a local file or HDFS

scala> val a = sc.textFile("/root.text.txt")
a: org.apache.spark.rdd.RDD[String] = /root.text.txt MapPartitionsRDD[22] at textFile at <console>:24
scala> val a = sc.textFile("hdfs://hadoop-01:9000/text.txt")
a: org.apache.spark.rdd.RDD[String] = hdfs://hadoop-01:9000/text.txt MapPartitionsRDD[24] at textFile at <console>:24
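Outside the shell, the same call works in a standalone application once a SparkContext is constructed explicitly. A minimal sketch, assuming Spark is on the classpath; the master URL, app name, and path below are placeholders, not values from the session above:

```scala
// Sketch only: standalone version of method I.
import org.apache.spark.{SparkConf, SparkContext}

object TextFileDemo {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("textfile-demo").setMaster("local[*]")
    val sc   = new SparkContext(conf)
    // textFile also accepts an optional minPartitions hint as a second argument
    val lines = sc.textFile("hdfs://hadoop-01:9000/text.txt", 4)
    // filter is a Transformation; count is an action that triggers the read
    println(lines.filter(_.nonEmpty).count())
    sc.stop()
  }
}
```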

II: By parallelizing a Scala collection in the Driver program (typically used for testing and experiments)

scala> val a = sc.parallelize(List(1,2,4,5))
a: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[25] at parallelize at <console>:24
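parallelize also takes an optional numSlices argument that controls how many partitions the collection is split into. A sketch, assuming the same spark-shell session (so sc is already defined):

```scala
// numSlices controls the partition count of the resulting RDD
val nums = sc.parallelize(List(1, 2, 4, 5), numSlices = 2)
// reduce is an action, so this line actually submits a job
val total = nums.reduce(_ + _) // 12
```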

III: By calling a Transformation on an existing RDD, which generates a new RDD

scala> val b = a.map(x=>(x,1))
b: org.apache.spark.rdd.RDD[(Int, Int)] = MapPartitionsRDD[26] at map at <console>:26
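Because b holds key-value pairs, further Transformations such as reduceByKey can be chained onto it; only an action like collect actually runs the job. A sketch, continuing the session above:

```scala
// Each Transformation returns a new RDD without running anything
val counts = b.reduceByKey(_ + _)   // still lazy
val result = counts.collect().toMap // collect() is an action: the job runs here
// result: Map(1 -> 1, 2 -> 1, 4 -> 1, 5 -> 1)
```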

Characteristics of RDD Transformations:
● Lazy: computation happens only when the result is actually needed (i.e., when an action is called)
● Each Transformation generates a new RDD
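The lazy property can be observed directly: a side effect placed inside map does not fire when the Transformation is defined, only when an action runs. A sketch, assuming an active sc as in the shell session above:

```scala
val data   = sc.parallelize(1 to 4)
val mapped = data.map { x => println(s"computing $x"); x * 2 } // nothing printed yet
val out    = mapped.collect() // the println side effects happen only now
// out: Array(2, 4, 6, 8)
```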

Reposted from blog.csdn.net/bb23417274/article/details/82922926