Setting up a Spark + Hadoop cluster on virtual machines
1. Environment for this setup
VMware Fusion
master: Ubuntu 16.04 64-bit, IP 172.16.29.11
slave1: Ubuntu 16.04 64-bit, IP 172.16.29.12
slave2: Ubuntu 16.04 64-bit, IP 172.16.29.13
JDK 9.0.4
Hadoop 2.8.1
Spark 2.3.0
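The configuration below refers to the machines by hostname (hadoop11 for the master, hadoop12/hadoop13 for the slaves). Assuming those hostnames map to the IPs above, a sketch of the /etc/hosts entries each node needs (written to a demo file here; on the real machines append the lines to /etc/hosts with sudo):

```shell
# Demo: the host entries every node needs; the /tmp path is a stand-in for /etc/hosts
cat <<'EOF' | tee /tmp/hosts-spark-demo
172.16.29.11 hadoop11
172.16.29.12 hadoop12
172.16.29.13 hadoop13
EOF
```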
2. Installing and deploying the JDK and Hadoop
See the previous post: setting up a fully distributed Hadoop cluster on virtual machines
3. Installing and configuring Spark
> wget http://mirrors.shu.edu.cn/apache/spark/spark-2.3.0/spark-2.3.0-bin-hadoop2.7.tgz # download
> tar -zxvf spark-2.3.0-bin-hadoop2.7.tgz # extract
> mv spark-2.3.0-bin-hadoop2.7 spark # rename the extracted directory (not the .tgz)
> mv ./spark/ /usr/local/ # move it under /usr/local/
Configure the environment variables by appending to the end of /etc/profile:
> vi /etc/profile
...
#spark
export SPARK_HOME=/usr/local/spark # spark install path
export SPARK_SCALA_VERSION=2.11 # Scala version bundled with Spark 2.3.0 (not the Spark version number)
Afterwards, remember to run:
> source /etc/profile
Go to /usr/local/spark/conf/ and create the file spark-env.sh:
export JAVA_HOME=/usr/local/jdk
export HADOOP_HOME=/usr/local/hadoop
export HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop
export SPARK_MASTER_IP=hadoop11
export SPARK_WORKER_MEMORY=1g
export SPARK_WORKER_CORES=1
export SPARK_WORKER_INSTANCES=1
Variable notes
- JAVA_HOME: Java install directory
- SCALA_HOME: Scala install directory (only needed if Scala is installed separately; not set above)
- HADOOP_HOME: Hadoop install directory
- HADOOP_CONF_DIR: directory holding the Hadoop cluster's configuration files
- SPARK_MASTER_IP: IP address (or hostname) of the Spark cluster's master node
- SPARK_WORKER_MEMORY: maximum memory each worker node may allocate to executors
- SPARK_WORKER_CORES: number of CPU cores each worker node uses
- SPARK_WORKER_INSTANCES: number of worker instances started on each machine
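Spark ships a `conf/spark-env.sh.template`; a common approach is to copy it and append the settings above. The sketch below uses a scratch directory and a stand-in template file so it runs anywhere; on the cluster, CONF_DIR would be /usr/local/spark/conf and the template already exists:

```shell
# Sketch: build spark-env.sh from the shipped template, then append this cluster's settings.
CONF_DIR="${CONF_DIR:-/tmp/spark-conf-demo}"   # real path: /usr/local/spark/conf
mkdir -p "$CONF_DIR"
touch "$CONF_DIR/spark-env.sh.template"        # stand-in; Spark ships the real template
cp "$CONF_DIR/spark-env.sh.template" "$CONF_DIR/spark-env.sh"
cat >> "$CONF_DIR/spark-env.sh" <<'EOF'
export JAVA_HOME=/usr/local/jdk
export HADOOP_HOME=/usr/local/hadoop
export HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop
export SPARK_MASTER_IP=hadoop11
export SPARK_WORKER_MEMORY=1g
export SPARK_WORKER_CORES=1
export SPARK_WORKER_INSTANCES=1
EOF
```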
Go to /usr/local/spark/conf/ and create the file slaves:
hadoop12 # slave1 hostname
hadoop13 # slave2 hostname
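The slaves file can also be written in one command (a scratch path stands in here; the real file is /usr/local/spark/conf/slaves):

```shell
SLAVES_FILE="${SLAVES_FILE:-/tmp/spark-slaves-demo}"   # real path: /usr/local/spark/conf/slaves
printf '%s\n' hadoop12 hadoop13 > "$SLAVES_FILE"       # one worker hostname per line
cat "$SLAVES_FILE"
```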
Sync the configuration to slave1 and slave2:
> scp -r /usr/local/spark/ yourname@hadoop12:/usr/local/
> scp -r /usr/local/spark/ yourname@hadoop13:/usr/local/
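With more workers, the two scp commands generalize to a loop. The sketch below echoes each command (a dry run) so the targets can be checked first; drop the `echo` to actually copy:

```shell
# Dry run: print the scp command for each worker (remove `echo` to execute)
for host in hadoop12 hadoop13; do
  echo scp -r /usr/local/spark/ "yourname@${host}:/usr/local/"
done
```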
Set permissions (on each node):
> sudo chmod -R 755 /usr/local/spark/ # chmod requires a mode; 755 gives the owner full access
> sudo chown -R yourname:yourname /usr/local/spark
Start Spark from /usr/local/spark/:
> ./sbin/start-all.sh
Check that the processes started, using jps:
> jps # on the master
Master
...
> jps # on a slave
Worker
...
Open http://172.16.29.11:8080 to view the master's web UI.
Start the shell from /usr/local/spark/:
> ./bin/pyspark
Reference:
https://blog.csdn.net/weixin_36394852/article/details/76030317