Spark Kryo serialization error after a compression-format change

After the compression format of the data in the upstream data lake changed,
queries through Spark SQL's Thrift JDBC interface started failing with the following error:

19/07/29 06:12:55 WARN scheduler.TaskSetManager: Lost task 1.0 in stage 1.0 (TID 4, s015.test.com, executor 1): org.apache.spark.SparkException: Kryo serialization failed: Buffer overflow. Available: 0, required: 10300408. To avoid this, increase spark.kryoserializer.buffer.max value.
	at org.apache.spark.serializer.KryoSerializerInstance.serialize(KryoSerializer.scala:315)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:367)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)

19/07/29 06:12:55 INFO scheduler.TaskSetManager: Starting task 1.1 in stage 1.0 (TID 5, s015.test.com, executor 1, partition 1, RACK_LOCAL, 8283 bytes)
19/07/29 06:12:57 WARN scheduler.TaskSetManager: Lost task 0.0 in stage 1.0 (TID 3, s015.test.com, executor 1): org.apache.spark.SparkException: Kryo serialization failed: Buffer overflow. Available: 0, required: 10453339. To avoid this, increase spark.kryoserializer.buffer.max value.
	at org.apache.spark.serializer.KryoSerializerInstance.serialize(KryoSerializer.scala:315)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:367)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)

Preliminary analysis: when the recompressed data is decompressed and serialized, the Kryo buffer runs out of space (the task's serialized payload exceeds spark.kryoserializer.buffer.max, as the error message itself suggests). The buffer sizes need to be raised via parameters at spark-submit time:

--conf spark.kryoserializer.buffer.max=512m \
--conf spark.kryoserializer.buffer=256m \
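Equivalently, if the application builds its own SparkContext instead of relying on spark-submit flags, the same properties can be set on the SparkConf before the context is created. A minimal sketch (the application name is a placeholder; the rest uses the same Spark 1.x API as the code further below):

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext

// Kryo buffer properties are read once at startup, so they must be
// set before the SparkContext is instantiated.
val conf = new SparkConf()
  .setAppName("kryo-buffer-demo") // placeholder name
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  .set("spark.kryoserializer.buffer", "256m")     // initial per-task buffer
  .set("spark.kryoserializer.buffer.max", "512m") // hard ceiling for one serialized object

val sc = new SparkContext(conf)
val hiveContext = new HiveContext(sc)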

In addition, modify the program to increase the parallelism, so that each task serializes a smaller slice of the data:

// hiveContext.sql returns a DataFrame; repartition it so the rows
// are spread across more (and therefore smaller) tasks
val resultDf = hiveContext.sql(sql)
resultDf.repartition(100).registerTempTable("a")
hiveContext.sql("insert overwrite table table_a select * from a")

Reprinted from blog.csdn.net/lhxsir/article/details/97626629