Beijing 2018 Points-Based Hukou (积分落户) Data: Big-Data Visualization with PySpark and pyecharts, Grouped by Applicant's Employer


This post analyzes the 2018 Beijing points-based hukou (积分落户) admission list with PySpark and visualizes the results with pyecharts, grouping applicants by the employer (单位) on record.

Group the records by the applicant's employer, count them, and keep the 50 largest employers.

# Load the admission list (one row per approved applicant)
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("jflh").getOrCreate()
df = spark.read.csv('jifenluohu.csv', header=True, inferSchema=True)
df.cache()
df.createOrReplaceTempView("jflh")
# df.show()

# Top 50 employers by number of approved applicants
spCount = spark.sql(
    "select unit as name, count(1) as ct from jflh "
    "group by unit order by ct desc limit 50").collect()
name = [row.name for row in spCount]
count = [row.ct for row in spCount]
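
The same top-50 aggregation can also be written with the DataFrame API instead of Spark SQL. A minimal sketch, assuming the `df` loaded above with its `unit` column:

from pyspark.sql import functions as F

# Group by employer, count rows, keep the 50 largest groups
top50 = (df.groupBy("unit")
           .agg(F.count("*").alias("ct"))
           .orderBy(F.col("ct").desc())
           .limit(50)
           .collect())
name = [row["unit"] for row in top50]
count = [row["ct"] for row in top50]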

# Render the chart (pyecharts 0.x API)
from pyecharts import Bar
bar = Bar("2018 Beijing points-based hukou analysis", "Applicant count by employer")
bar.add("applicants", name, count)
bar  # in a Jupyter notebook, this displays the chart inline
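
Outside a notebook, the bare `bar` expression on the last line displays nothing; pyecharts 0.x can instead write the chart to a standalone HTML file. A minimal sketch (the output filename here is arbitrary):

# Write the chart to an HTML page that can be opened in a browser
bar.render("jflh_unit_top50.html")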
