Big Data: Word Count with Spark

Question 3: Use Spark Core to count the number of occurrences of each word that starts with "spark" in a file (30 points total)
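
The input file referenced by the question contains the following sample text: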

spark-core hadoop linux java spark-sql
storm html css vue spark
spring springboot struts
spark-hive
mapreduce hbase flume kafka
storm html css vue spark javascript
spring springboot struts
spark-hive php

1) Create a Spark project and read the file above into an RDD (5 points)
2) Split the file content into individual word strings (5 points)
3) Filter out the strings that start with "spark" (5 points)
4) Perform the appropriate transformation on the filtered strings (5 points)
5) Accumulate the transformed results into per-word counts (5 points)
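
The Scala program below implements all five steps in a single RDD pipeline; it assumes the sample text above has been saved locally as D:\words.txt: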

import org.apache.spark.{SparkConf, SparkContext}

object Test3 {
  // On Windows, point Spark at a local Hadoop installation (for winutils.exe)
  System.setProperty("hadoop.home.dir", "D:\\Studyingimportant\\hadoop-2.9.2")

  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setMaster("local[2]").setAppName("wordcount")
    val sc = new SparkContext(conf)

    sc.textFile("D:\\words.txt")                 // 1) read the file into an RDD of lines
      .flatMap(line => line.split(" "))          // 2) split each line into words
      .filter(word => word.startsWith("spark"))  // 3) keep only words starting with "spark"
      .map(word => (word, 1))                    // 4) pair each word with an initial count of 1
      .reduceByKey((x, y) => x + y)              // 5) sum the counts for each word
      .foreach(println)

    sc.stop()
  }
}
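
Run against the sample file above, the job should print one (word, count) pair per qualifying word. The order of the lines may differ between runs, since foreach executes across the RDD's partitions without any ordering guarantee:

(spark-core,1)
(spark-sql,1)
(spark,2)
(spark-hive,2)

In local[2] mode the println output appears in the driver's console; on a real cluster, results are usually brought back to the driver with collect() before printing.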

Reposted from www.cnblogs.com/whyuan/p/12968858.html