I have recently been studying big data, and in my existing cluster environment I walked through how to add a new node to a running cluster. This scenario is common in practice: as a company's business grows, the data volume increases and the capacity of the original DataNodes can no longer meet storage demand, so new DataNodes have to be added dynamically to the existing cluster.
1) Environment preparation
(1) Clone a virtual machine
(2) Change the clone's IP address and hostname
(3) Modify the xcall and xsync scripts so their ssh/sync loops include the new node
(4) Delete the files the cloned HDFS filesystem left behind (a sketch follows this list):
/opt/module/hadoop-2.7.2/data
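A minimal sketch of step (4), run on the cloned node. Removing logs/ as well is my own addition, not required by the steps above:
# Run on the cloned hadoop105 before starting any daemons. The old data/
# directory carries the source machine's storage ID; removing it lets the
# new DataNode register cleanly with the NameNode.
rm -rf /opt/module/hadoop-2.7.2/data
# Optional (an assumption, not part of the original steps): clear stale logs.
rm -rf /opt/module/hadoop-2.7.2/logs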
2) Steps to commission the new node
(1) On the NameNode, create a dfs.hosts file under /opt/module/hadoop-2.7.2/etc/hadoop
[zhang@hadoop102 hadoop]$ pwd
/opt/module/hadoop-2.7.2/etc/hadoop
[zhang@hadoop102 hadoop]$ touch dfs.hosts
[zhang@hadoop102 hadoop]$ vi dfs.hosts
Add the following hostnames (including the newly commissioned node):
hadoop102
hadoop103
hadoop104
hadoop105
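Equivalently, the file can be written non-interactively with a heredoc (just an alternative to vi; same result):
cat > /opt/module/hadoop-2.7.2/etc/hadoop/dfs.hosts <<'EOF'
hadoop102
hadoop103
hadoop104
hadoop105
EOF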
(2) Add the dfs.hosts property to hdfs-site.xml on the NameNode (only the NameNode reads this file, so the change does not need to be distributed):
<property>
    <name>dfs.hosts</name>
    <value>/opt/module/hadoop-2.7.2/etc/hadoop/dfs.hosts</value>
</property>
(3) Refresh the NameNode
[zhang@hadoop102 hadoop-2.7.2]$ hdfs dfsadmin -refreshNodes
Refresh nodes successful
(4) Refresh the ResourceManager's node list
[zhang@hadoop102 hadoop-2.7.2]$ yarn rmadmin -refreshNodes
17/06/24 14:17:11 INFO client.RMProxy: Connecting to ResourceManager at hadoop103/192.168.1.103:8033
(5) Add the new hostname to the slaves file on the NameNode
Add hadoop105. This file does not need to be distributed: slaves is only read by the cluster start/stop scripts on the machine where they are run.
hadoop102
hadoop103
hadoop104
hadoop105
(6) Start the new DataNode and NodeManager individually with the daemon scripts
[zhang@hadoop105 hadoop-2.7.2]$ sbin/hadoop-daemon.sh start datanode
starting datanode, logging to /opt/module/hadoop-2.7.2/logs/hadoop-zhang-datanode-hadoop105.out
[zhang@hadoop105 hadoop-2.7.2]$ sbin/yarn-daemon.sh start nodemanager
starting nodemanager, logging to /opt/module/hadoop-2.7.2/logs/yarn-zhang-nodemanager-hadoop105.out
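To verify (my addition, not part of the original steps): jps on hadoop105 should now show both DataNode and NodeManager, and a dfsadmin report run from any node should list hadoop105 among the live DataNodes:
[zhang@hadoop105 hadoop-2.7.2]$ jps
[zhang@hadoop102 hadoop-2.7.2]$ hdfs dfsadmin -report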
(7) Check in the NameNode web UI that the new node appears as a live DataNode (in Hadoop 2.x the UI listens on port 50070 by default, e.g. http://hadoop102:50070)
3) If data ends up unevenly distributed across the DataNodes, rebalance the cluster with the balancer:
[zhang@hadoop102 sbin]$ ./start-balancer.sh
starting balancer, logging to /opt/module/hadoop-2.7.2/logs/hadoop-atguigu-balancer-hadoop102.out
Time Stamp Iteration# Bytes Already Moved Bytes Left To Move Bytes Being Moved
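Two knobs worth knowing (my addition; both exist in Hadoop 2.x): the balance threshold and the bandwidth the balancer may consume per DataNode:
# Consider the cluster balanced once every DataNode's utilization is within
# 10 percentage points of the cluster average (10 is also the default).
[zhang@hadoop102 sbin]$ ./start-balancer.sh -threshold 10
# Cap the balancer's per-DataNode bandwidth, in bytes per second (here 10 MB/s).
[zhang@hadoop102 hadoop-2.7.2]$ hdfs dfsadmin -setBalancerBandwidth 10485760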
4) For completeness, the xcall and xsync scripts
The xcall script (runs the given command locally, then on every node):
#!/bin/bash
# xcall: run the given command locally, then on hadoop101-hadoop105 via ssh.
pcount=$#
if ((pcount == 0)); then
    echo "no args"
    exit 1
fi
echo -------localhost---------
"$@"
# The loop includes hadoop105, the newly commissioned node.
for ((host = 101; host <= 105; host++)); do
    echo --------hadoop$host----------
    ssh hadoop$host "$@"
done
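For example, to check which Java daemons are running on every node (assuming the script is on your PATH):
[zhang@hadoop102 ~]$ xcall jps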
The xsync script (mirrors a file or directory to the same path on the other nodes):
#!/bin/bash
# xsync: rsync the given file or directory to the same path on the other nodes.
pcount=$#
if ((pcount == 0)); then
    echo "no args"
    exit 1
fi
p1=$1
fname=$(basename "$p1")
echo fname=$fname
# Resolve the absolute directory, following symlinks (-P).
pdir=$(cd -P "$(dirname "$p1")"; pwd)
echo pdir=$pdir
user=$(whoami)
# <= 105 so the newly commissioned hadoop105 is included in the sync.
for ((host = 102; host <= 105; host++)); do
    echo -----------hadoop$host------------
    rsync -rvl "$pdir/$fname" "$user@hadoop$host:$pdir"
done
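For example, to push an edited configuration file to the same path on all the other nodes (purely illustrative; as noted above, slaves and dfs.hosts themselves only matter on the NameNode):
[zhang@hadoop102 ~]$ xsync /opt/module/hadoop-2.7.2/etc/hadoop/hdfs-site.xml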
The steps above are for reference only; if anything is wrong, corrections are welcome.