Single-node Hadoop platform

Setting up the OS environment for the Hadoop platform

Basic system settings:

Hostname: master
IP: 192.168.1.109
Users/passwords: root/oppo, hadoop/oppo

Configure the IP address:

You can use either nmtui or vim on the config file; here we edit the config file directly.

[root@master src]# vim /etc/sysconfig/network-scripts/ifcfg-enp0s3 
TYPE=Ethernet
BOOTPROTO=none	# static IP
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=enp0s3
UUID=dfa1c686-93dd-49e4-abd9-d370892d923f
DEVICE=enp0s3
ONBOOT=yes	# bring the NIC up at boot
IPADDR=192.168.1.109	# static IP address
[root@master src]# systemctl restart network   # restart networking
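
As an alternative to editing the file by hand, nmcli can apply the same settings non-interactively. The /24 prefix, gateway, and DNS addresses below are assumptions not stated above; adjust them to your network:

```shell
# Hedged sketch: same static-IP setup via nmcli.
# 192.168.1.109 and enp0s3 come from this guide; the /24 prefix,
# gateway 192.168.1.1, and DNS 192.168.1.1 are assumed values.
nmcli con mod enp0s3 ipv4.method manual \
  ipv4.addresses 192.168.1.109/24 \
  ipv4.gateway 192.168.1.1 \
  ipv4.dns 192.168.1.1
nmcli con up enp0s3   # re-activate the connection to apply the change
```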

Set the server's hostname and bind the hostname to its IP in hosts:

[root@master src]# vim /etc/hostname 
master
[root@master src]# reboot #or just start a new login session
[root@master src]# vim /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

`192.168.1.109 master`
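
You can verify the mapping without DNS. The snippet below demonstrates the check on a throwaway copy so it is safe to run anywhere; point the same grep at the real /etc/hosts after the edit:

```shell
# Demo on a temp copy of /etc/hosts; the IP/hostname pair is the one
# added in this guide.
hosts=/tmp/hosts_demo
printf '127.0.0.1 localhost\n192.168.1.109 master\n' > "$hosts"
grep -E '^192\.168\.1\.109[[:space:]]+master' "$hosts" && echo "mapping present"
```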

Check the ssh service

[root@master src]# systemctl status sshd
● sshd.service - OpenSSH server daemon
   Loaded: loaded (/usr/lib/systemd/system/sshd.service; enabled; vendor preset: enabled)
   `Active: active (running)` since Thu 2021-03-18 19:41:07 CST; 23min ago
     Docs: man:sshd(8)
           man:sshd_config(5)
 Main PID: 934 (sshd)
   CGroup: /system.slice/sshd.service
           └─934 /usr/sbin/sshd

The sshd service is running at this point.

Disable the firewall and SELinux

[root@master src]# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   `Active: inactive (dead)`
     Docs: man:firewalld(1)

The firewall is already stopped here; if it is not, run:

[root@master src]# systemctl stop firewalld   # stop the firewall
[root@master src]# systemctl disable firewalld # keep it off across reboots

Check the SELinux status:

[root@master src]# getenforce
`Disabled`  # SELinux is not in effect
If the output is not Disabled, edit:
[root@master selinux]# vim /etc/selinux/config 

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
`SELINUX=disabled`   # change this value to disabled
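
The same edit can be made non-interactively with sed. The snippet demonstrates it on a temp copy so it is safe to run anywhere; on the real system, run the identical sed against /etc/selinux/config (a reboot is still needed for it to take full effect):

```shell
# Demo the sed edit on a temp copy of the SELinux config.
cfg=/tmp/selinux_config_demo
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$cfg"
sed -i 's/^SELINUX=.*/SELINUX=disabled/' "$cfg"   # same command works on /etc/selinux/config
grep '^SELINUX=' "$cfg"
```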

Create the hadoop user

[root@master selinux]# useradd hadoop
[root@master home]# ls
hadoop

Install the JDK

Download the JDK (note: the link below is Oracle's JDK 8 download page, while the rest of this guide installs JDK 16 — download the release you actually intend to use):

https://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html

Check whether a JDK is already installed:

[root@master src]# java -version
-bash: java: command not found

No JDK is present, so unpack the downloaded archive (it extracts to /usr/local/src/jdk-16):

tar -zxvf jdk-16_linux-x64_bin.tar.gz

Configure the environment variables:

[root@master src]# vim  /etc/profile  
#append at the very bottom of the file
export JAVA_HOME=/usr/local/src/jdk-16
export PATH=$PATH:$JAVA_HOME/bin
[root@master src]# source /etc/profile #reload so the changes take effect

Note: JDK 9 and later no longer ship dt.jar, tools.jar, or a jre/ directory, so the old CLASSPATH and JRE_HOME exports are unnecessary for JDK 16.
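
A small sanity check can confirm the JDK path is usable before touching Hadoop. The path below is the install location used in this guide and may differ on your machine:

```shell
# Check that the JDK from this guide exists and is executable.
JDK_DIR=/usr/local/src/jdk-16
if [ -x "$JDK_DIR/bin/java" ]; then
  "$JDK_DIR/bin/java" -version
else
  echo "java not found under $JDK_DIR"
fi
```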

Test whether the installation succeeded:
Check 1:

[root@master src]# java -version
java version "16" 2021-03-16
Java(TM) SE Runtime Environment (build 16+36-2231)
Java HotSpot(TM) 64-Bit Server VM (build 16+36-2231, mixed mode, sharing)

Check 2:

[root@master src]# javac
Usage: javac <options> <source files>
where possible options include:
  @<filename>                  Read options and filenames from file
  -Akey[=value]                Options to pass to annotation processors
  --add-modules <module>(,<module>)*
............

Check 3:

[root@master src]# java -v 
Unrecognized option: -v
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
[root@master src]# java 
Usage: java [options] <mainclass> [args...]
           (to execute a class)
..................

Install Hadoop

Download Hadoop:

wget https://mirrors.bfsu.edu.cn/apache/hadoop/common/hadoop-3.2.2/hadoop-3.2.2.tar.gz
cd /usr/local/src
tar -zxvf hadoop-3.2.2.tar.gz 

Configure the environment variables:

[root@master src]# vim /etc/profile
#append at the bottom
export HADOOP_HOME=/usr/local/src/hadoop-3.2.2
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
[root@master src]# source /etc/profile  #reload the profile

Check whether Hadoop is installed correctly:

[root@master src]# hadoop
Usage: hadoop [OPTIONS] SUBCOMMAND [SUBCOMMAND OPTIONS]
 or    hadoop [OPTIONS] CLASSNAME [CLASSNAME OPTIONS]
  where CLASSNAME is a user-provided Java class

  OPTIONS is none or any of:
 ...............................

Change ownership of the hadoop and jdk directories under /usr/local/src/ to the hadoop user:

chown -R hadoop:hadoop /usr/local/src/*

Set up Hadoop in standalone (local) mode

Edit the Hadoop configuration file:

This tells Hadoop where the JDK is installed.

vim /usr/local/src/hadoop-3.2.2/etc/hadoop/hadoop-env.sh
#find the "export JAVA_HOME=" line, uncomment it, and set it to:
export JAVA_HOME=/usr/local/src/jdk-16   

Test Hadoop in local mode

Switch to the hadoop user:

su - hadoop

Create the input data directory:

mkdir ~/input       
vim ~/input/data.txt
#add the following lines
hello hadoop
hello java
hello chenfeng
hello hadoop

Run the MapReduce test:

hadoop jar /usr/local/src/hadoop-3.2.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.2.2.jar wordcount ~/input/data.txt ~/output

Job output:

[hadoop@oppo3 ~]$ hadoop jar /usr/local/src/hadoop-3.2.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.2.2.jar wordcount ~/input/data.txt ~/output 
2021-03-19 13:55:01,115 INFO impl.MetricsConfig: Loaded properties from hadoop-metrics2.properties
2021-03-19 13:55:11,362 INFO impl.MetricsSystemImpl: Scheduled Metric snapshot period at 10 second(s).
2021-03-19 13:55:11,363 INFO impl.MetricsSystemImpl: JobTracker metrics system started
2021-03-19 13:55:16,617 INFO input.FileInputFormat: Total input files to process : 1
...........................
2021-03-19 13:55:17,335 INFO mapred.LocalJobRunner: reduce task executor complete.
2021-03-19 13:55:18,055 INFO mapreduce.Job: Job job_local933413955_0001 running in uber mode : false
2021-03-19 13:55:18,057 INFO mapreduce.Job:  map 100% reduce 100%
2021-03-19 13:55:18,058 INFO mapreduce.Job: Job job_local933413955_0001 completed successfully
2021-03-19 13:55:18,063 INFO mapreduce.Job: Counters: 30
..........................

Result files:

[hadoop@oppo3 ~]$ cd output/
[hadoop@oppo3 output]$ ls
part-r-00000  _SUCCESS
[hadoop@oppo3 output]$ 
[hadoop@oppo3 output]$ cat part-r-00000 
chenfeng        1
hadoop  2
hello   4
java    1

The counts match the input data.
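
As a cross-check, the same counts can be reproduced with plain coreutils; /tmp/data.txt below is just a scratch copy of the data.txt written earlier:

```shell
# Recreate the four-line input and count words with shell tools only.
printf 'hello hadoop\nhello java\nhello chenfeng\nhello hadoop\n' > /tmp/data.txt
# Split on spaces, sort, count duplicates, print as "word count".
tr -s ' ' '\n' < /tmp/data.txt | sort | uniq -c | awk '{print $2, $1}'
# -> chenfeng 1 / hadoop 2 / hello 4 / java 1, same as part-r-00000
```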

Note: the output directory must not exist before the job runs; if ~/output already exists, delete it first, then rerun the MapReduce job.
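
The pre-clean step can be scripted. The guard below is demonstrated on a throwaway directory; run the same guard against ~/output right before the `hadoop jar` command:

```shell
# Remove a stale output dir if present, then report the state.
OUT=/tmp/output_demo                          # stand-in for ~/output
mkdir -p "$OUT" && touch "$OUT/part-r-00000"  # simulate leftovers from a prior run
[ -d "$OUT" ] && rm -rf "$OUT"                # the guard to run before 'hadoop jar'
[ -d "$OUT" ] && echo "still there" || echo "cleaned"
# -> cleaned
```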

Reposted from blog.csdn.net/qq_40736702/article/details/114989539