This walkthrough uses Ubuntu Server 15.10 and Hadoop 2.7.2. Hadoop 1.x and Hadoop 2.x differ significantly in APIs, directory layout, and configuration files. I originally followed a book whose examples used Hadoop 1.0.1 and ran into many problems; after some struggle I got it running, but issues remained. The conclusion I reached: the official documentation is the best guide!
The project no longer recommends starting with sbin/start-all.sh (running it prints "This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh"). After restarting Hadoop with that script, the services on ports 8080 and 50070 failed to come up; reformatting the filesystem and starting again brought 50070 back.
The steps below follow the official documentation for Hadoop 2.7.2 (released 2016-01-26), available at http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/SingleCluster.html#Standalone_Operation
1. Set up the Java environment by installing a JDK. I used the latest release, Java 8.
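The JDK root installed in step 1 is what hadoop-env.sh will later need as JAVA_HOME. A minimal sketch of deriving it from the java binary path; the path below is a hypothetical example (on a real system, inspect `readlink -f "$(which java)"`):

```shell
# Sketch: derive the JDK root from a java binary path.
# The path is a hypothetical example of an Ubuntu OpenJDK 8 install.
JAVA_BIN=/usr/lib/jvm/java-8-openjdk-amd64/bin/java
JAVA_HOME=${JAVA_BIN%/bin/java}   # strip the trailing /bin/java
export JAVA_HOME
echo "$JAVA_HOME"
```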
2. Set up SSH. I installed it with apt-get using the defaults. You can configure passwordless SSH login, but I kept the default password-based login.
3. Install and configure Hadoop
① To get a Hadoop distribution, download a recent stable release from one of the Apache Download Mirrors.
② Unpack the downloaded Hadoop distribution. In the distribution, edit the file etc/hadoop/hadoop-env.sh to define some parameters as follows:
# set to the root of your Java installation
export JAVA_HOME=/usr/java/latest

Then try the following command:
$ bin/hadoop
Pseudo-Distributed Operation
Configuration
Use the following:
etc/hadoop/core-site.xml:
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
etc/hadoop/hdfs-site.xml:
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
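The two XML fragments above can be written from the shell. A minimal sketch, assuming the current directory is the root of the unpacked Hadoop distribution (the relative etc/hadoop path comes from the docs above):

```shell
# Sketch: write the two pseudo-distributed config files shown above.
mkdir -p etc/hadoop
cat > etc/hadoop/core-site.xml <<'EOF'
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
EOF
cat > etc/hadoop/hdfs-site.xml <<'EOF'
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
EOF
# Quick sanity check: print the configured NameNode URI
grep -o 'hdfs://[^<]*' etc/hadoop/core-site.xml
```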
Setup passphraseless ssh
Now check that you can ssh to the localhost without a passphrase (this step is optional, but without it, starting Hadoop later will prompt for a password):
$ ssh localhost
If you cannot ssh to localhost without a passphrase, execute the following commands:
$ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
$ chmod 0600 ~/.ssh/authorized_keys
Execution
The following instructions are to run a MapReduce job locally.
Format the filesystem:
$ bin/hdfs namenode -format
Start the NameNode daemon and DataNode daemon:
$ sbin/start-dfs.sh
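After start-dfs.sh, the `jps` command should list NameNode and DataNode among the running Java processes. The sketch below checks a sample of jps output for the expected daemon names; the output and PIDs are hypothetical, so on a live cluster substitute `JPS_OUTPUT="$(jps)"`:

```shell
# Sketch: confirm HDFS daemons appear in jps output.
# JPS_OUTPUT is hypothetical sample data; on a real node use: JPS_OUTPUT="$(jps)"
JPS_OUTPUT="12345 NameNode
12346 DataNode
12347 SecondaryNameNode"
for daemon in NameNode DataNode; do
  if echo "$JPS_OUTPUT" | grep -q "$daemon"; then
    echo "$daemon running"
  else
    echo "$daemon missing"
  fi
done
```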
The Hadoop daemon log output is written to the $HADOOP_LOG_DIR directory (defaults to $HADOOP_HOME/logs). Check these logs first when a daemon fails to start.
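The default described above can be resolved in the shell with a parameter-expansion fallback; a small sketch, where the install path is a hypothetical example:

```shell
# Sketch: resolve the effective log directory as the docs describe.
unset HADOOP_LOG_DIR                           # pretend it is not set
HADOOP_HOME=/opt/hadoop-2.7.2                  # hypothetical install location
LOG_DIR=${HADOOP_LOG_DIR:-$HADOOP_HOME/logs}   # fall back to $HADOOP_HOME/logs
echo "$LOG_DIR"
```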
Browse the web interface for the NameNode; by default it is available at: http://localhost:50070/