Purpose:
By writing Spark event logs to a shared HDFS directory, the state of Spark jobs can be displayed and tracked through a History Server running on Kubernetes.
Prerequisites:
Make sure Hadoop HDFS is up and running, and create the following directory in HDFS beforehand:
hadoop fs -mkdir /eventLog
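Since the History Server will later connect as a non-root user (the chart patch below sets HADOOP_USER_NAME to hadoop), the directory should also be writable by that user. A minimal sketch, assuming these commands run as the HDFS superuser and that `hadoop` is the chosen service user; adjust names and permissions to your environment:

```shell
# Create the event-log directory (idempotent) and hand it over to the
# non-root user the History Server and Spark jobs will use.
hadoop fs -mkdir -p /eventLog
hadoop fs -chown hadoop:hadoop /eventLog
# Sticky-bit world-writable, so jobs submitted as other users can also
# write their event logs here (tighten this if all jobs run as one user).
hadoop fs -chmod 1777 /eventLog
```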
Installing the Spark History Server on Kubernetes
1: Fetch the chart code
git clone https://github.com/banzaicloud/banzai-charts.git
Because HDFS must be accessed as a non-root user, and the service port needs to be exposed outside the cluster, make the following changes to the chart:
cd banzai-charts/spark-hs/
git diff
diff --git a/spark-hs/templates/deployment.yaml b/spark-hs/templates/deployment.yaml
index
env:
+ - name: HADOOP_USER_NAME
+ value: hadoop
- name: SPARK_NO_DAEMONIZE
value: "true"
diff --git a/spark-hs/templates/service.yaml b/spark-hs/templates/service.yaml
name: {{ .Chart.Name }}
+ nodePort: 30555
+ type: NodePort
selector:
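For reference, after applying the diff the affected sections of the two templates would look roughly as follows. This is a sketch: field order and the surrounding fields follow the banzai-charts layout and may differ in your chart version, and the container port shown is Spark's default History Server UI port.

```yaml
# templates/deployment.yaml (excerpt): run the History Server as the
# non-root HDFS user by exporting HADOOP_USER_NAME.
env:
  - name: HADOOP_USER_NAME
    value: hadoop
  - name: SPARK_NO_DAEMONIZE
    value: "true"

# templates/service.yaml (excerpt): expose the UI outside the cluster
# on a fixed NodePort.
spec:
  type: NodePort
  ports:
    - name: {{ .Chart.Name }}
      port: 18080        # History Server's default UI port
      nodePort: 30555
```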
2: Pull the image
docker pull spark-history-server:v2.2.1-k8s-1.0.30
3: Install with Helm
Enter the spark-hs directory and run the install command:
helm install --set app.logDirectory=hdfs://xx.xx.xx.xx:9000/eventLog .
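Note that the `app.logDirectory` passed to the chart must be exactly the same HDFS URI that jobs later write to via `spark.eventLog.dir`; otherwise the History Server starts fine but lists no applications. A tiny illustrative check, where both variables are placeholders for the values you actually use:

```shell
# Hypothetical sanity check: both components must point at the same
# event-log directory for completed jobs to show up in the UI.
CHART_LOG_DIR="hdfs://xx.xx.xx.xx:9000/eventLog"    # helm --set app.logDirectory=...
SUBMIT_LOG_DIR="hdfs://xx.xx.xx.xx:9000/eventLog"   # --conf spark.eventLog.dir=...
if [ "$CHART_LOG_DIR" = "$SUBMIT_LOG_DIR" ]; then
  echo "log directories match"
else
  echo "mismatch: History Server will not see new event logs" >&2
fi
```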
4: Run and access
When submitting a Spark job, specify the following configuration:
--conf spark.eventLog.dir=hdfs://xx.xx.xx.xx:9000/eventLog
--conf spark.eventLog.enabled=true
After that, the History Server UI can be accessed in a browser; with the service patch above it is exposed on every cluster node at NodePort 30555.
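Instead of repeating the two flags on every spark-submit, the same settings can be placed in spark-defaults.conf (the NameNode address is the same placeholder used above):

```conf
# conf/spark-defaults.conf
spark.eventLog.enabled  true
spark.eventLog.dir      hdfs://xx.xx.xx.xx:9000/eventLog
```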