Spark MLlib Introductory Learning Notes - The FPGrowth Frequent Itemset Algorithm


FPGrowth Frequent Itemset Algorithm

An association rule (AssociationRule) describes how different kinds of items are associated with one another. Apriori is a classic algorithm for association rule mining, but it has to scan the dataset many times. FP-Growth removes that drawback and needs only two passes over the data: the first pass counts item frequencies, and the second builds the FP-tree that is then mined for frequent itemsets.

Support measures how often the items of A and B occur together in the same transaction: Support(A -> B) = P(A ∪ B). Confidence measures how often B occurs given that A has occurred: Confidence(A -> B) = P(B | A) = P(A ∪ B) / P(A).
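As a concrete check against the sample data and output further down: the itemset {y, x} appears in 3 of the 6 transactions, and {y} also appears in 3 of them, so

Support(y -> x) = P(y ∪ x) = 3/6 = 0.5 (above the 0.3 minimum support set in the program)
Confidence(y -> x) = P(x | y) = 3/3 = 1.0

which is exactly the rule [y] => [x], 1.0 shown in the results.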

Running the example that ships with Spark

Program

package ass
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.mllib.fpm.FPGrowth
import org.apache.spark.rdd.RDD

object fpgrowth {

  def main(args: Array[String]) {

    // args(0) is the Spark master URL, args(1) is the path to the input file.
    val conf = new SparkConf().setMaster(args(0)).setAppName("fpgrowth")
    val sc = new SparkContext(conf)

    val data = sc.textFile(args(1))

    // Each line of the input is one transaction: items separated by a single space.
    val transactions: RDD[Array[String]] = data.map(s => s.trim.split(' '))

    // Keep itemsets that appear in at least 30% of the transactions,
    // and spread the computation over 10 partitions.
    val fpg = new FPGrowth()
      .setMinSupport(0.3)
      .setNumPartitions(10)
    val model = fpg.run(transactions)

    // Print every frequent itemset together with its frequency (count).
    model.freqItemsets.collect().foreach { itemset =>
      println(itemset.items.mkString("[", ",", "]") + ", " + itemset.freq)
    }

    // Generate association rules, keeping only those with confidence >= 0.8.
    val minConfidence = 0.8
    model.generateAssociationRules(minConfidence).collect().foreach { rule =>
      println(
        rule.antecedent.mkString("[", ",", "]")
          + " => " + rule.consequent.mkString("[", ",", "]")
          + ", " + rule.confidence)
    }
  }
}
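
The program takes the master URL and the input path as its two command-line arguments. One way to run it with spark-submit is sketched below; the jar name fpgrowth.jar and the file name sample_fpgrowth.txt are placeholders (the data listed in the next section is the sample_fpgrowth.txt file that ships with Spark under data/mllib/):

spark-submit --class ass.fpgrowth fpgrowth.jar local[2] sample_fpgrowth.txt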

Data

r z h k p
z y x w v u t s
s x o n r
x z y m t s q e
z
x z y r q t p
Output (abridged; most of it is omitted)

[z], 5
[x], 4
[x,z], 3
[y], 3
[y,x], 3
[y,x,z], 3
[y,z], 3
[r], 3
......
[t,s,y] => [x], 1.0
[t,s,y] => [z], 1.0
[y,x,z] => [t], 1.0
[y] => [x], 1.0
[y] => [z], 1.0
[y] => [t], 1.0
[p] => [r], 1.0
[p] => [z], 1.0
[q,t,z] => [y], 1.0
[q,t,z] => [x], 1.0
[q,y] => [x], 1.0
[q,y] => [z], 1.0
[q,y] => [t], 1.0
[t,s,x] => [y], 1.0
[t,s,x] => [z], 1.0
[q,t,y,z] => [x], 1.0
[q,t,x,z] => [y], 1.0
......
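Because the full list is long, the model output can also be trimmed before printing, for example by keeping only the most frequent itemsets. A minimal sketch (not part of the original program), reusing the model built above:

// Print only the 10 most frequent itemsets, highest count first.
model.freqItemsets
  .sortBy(_.freq, ascending = false)
  .take(10)
  .foreach(itemset => println(itemset.items.mkString("[", ",", "]") + ", " + itemset.freq))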



