1. First, create a new directory testFiles and put two test data files in it:
[root@SC-026 hadoop-1.0.3]# mkdir testFiles
[root@SC-026 hadoop-1.0.3]# cd testFiles/
[root@SC-026 testFiles]# echo "hello world, bye bye, world." > file1.txt
[root@SC-026 testFiles]# echo "hello hadoop, how are you? hadoop." > file2.txt
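A quick sanity check before uploading; the contents shown are exactly what the echo commands above produce:

[root@SC-026 testFiles]# cat file1.txt file2.txt
hello world, bye bye, world.
hello hadoop, how are you? hadoop.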
2. Copy the local ./testFiles directory into HDFS as a directory named input (a relative HDFS path resolves under the current user's home directory, /user/root here, not the filesystem root).
The first attempt failed with the following error:
[root@SC-026 hadoop-1.0.3]# bin/hadoop dfs -put ./testFiles input
put: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot create directory /user/root/input. Name node is in safe mode.
The error means the Hadoop namenode is in safe mode. Leaving safe mode as follows let the copy succeed on the second attempt:
[root@SC-026 hadoop-1.0.3]# bin/hadoop dfsadmin -safemode leave
Safe mode is OFF
[root@SC-026 hadoop-1.0.3]# bin/hadoop dfs -put ./testFiles input
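Forcing safe mode off like this is fine on a test box, but the namenode normally leaves safe mode on its own once enough block reports arrive. If you prefer not to force it, the standard dfsadmin options below check the state or simply wait, and a final -ls confirms the upload (commands only; output omitted):

bin/hadoop dfsadmin -safemode get    # print the current safe-mode state
bin/hadoop dfsadmin -safemode wait   # block until safe mode turns off by itself
bin/hadoop dfs -ls input             # verify that the two files arrived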
3. Run the test job, writing its results to output:
[root@SC-026 hadoop-1.0.3]# bin/hadoop jar hadoop-examples-1.0.3.jar wordcount input output
12/08/31 09:21:34 INFO input.FileInputFormat: Total input paths to process : 2
12/08/31 09:21:34 INFO util.NativeCodeLoader: Loaded the native-hadoop library
12/08/31 09:21:34 WARN snappy.LoadSnappy: Snappy native library not loaded
12/08/31 09:21:35 INFO mapred.JobClient: Running job: job_201208310909_0001
12/08/31 09:21:36 INFO mapred.JobClient:  map 0% reduce 0%
12/08/31 09:21:57 INFO mapred.JobClient:  map 50% reduce 0%
12/08/31 09:22:00 INFO mapred.JobClient:  map 100% reduce 0%
12/08/31 09:22:12 INFO mapred.JobClient:  map 100% reduce 100%
12/08/31 09:22:16 INFO mapred.JobClient: Job complete: job_201208310909_0001
12/08/31 09:22:16 INFO mapred.JobClient: Counters: 29
12/08/31 09:22:16 INFO mapred.JobClient:   Job Counters
12/08/31 09:22:16 INFO mapred.JobClient:     Launched reduce tasks=1
12/08/31 09:22:16 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=27675
12/08/31 09:22:16 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
12/08/31 09:22:16 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
12/08/31 09:22:16 INFO mapred.JobClient:     Launched map tasks=2
12/08/31 09:22:16 INFO mapred.JobClient:     Data-local map tasks=2
12/08/31 09:22:16 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=14460
12/08/31 09:22:16 INFO mapred.JobClient:   File Output Format Counters
12/08/31 09:22:16 INFO mapred.JobClient:     Bytes Written=78
12/08/31 09:22:16 INFO mapred.JobClient:   FileSystemCounters
12/08/31 09:22:16 INFO mapred.JobClient:     FILE_BYTES_READ=136
12/08/31 09:22:16 INFO mapred.JobClient:     HDFS_BYTES_READ=278
12/08/31 09:22:16 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=64909
12/08/31 09:22:16 INFO mapred.JobClient:     HDFS_BYTES_WRITTEN=78
12/08/31 09:22:16 INFO mapred.JobClient:   File Input Format Counters
12/08/31 09:22:16 INFO mapred.JobClient:     Bytes Read=64
12/08/31 09:22:16 INFO mapred.JobClient:   Map-Reduce Framework
12/08/31 09:22:16 INFO mapred.JobClient:     Map output materialized bytes=142
12/08/31 09:22:16 INFO mapred.JobClient:     Map input records=2
12/08/31 09:22:16 INFO mapred.JobClient:     Reduce shuffle bytes=142
12/08/31 09:22:16 INFO mapred.JobClient:     Spilled Records=22
12/08/31 09:22:16 INFO mapred.JobClient:     Map output bytes=108
12/08/31 09:22:16 INFO mapred.JobClient:     CPU time spent (ms)=3480
12/08/31 09:22:16 INFO mapred.JobClient:     Total committed heap usage (bytes)=411828224
12/08/31 09:22:16 INFO mapred.JobClient:     Combine input records=11
12/08/31 09:22:16 INFO mapred.JobClient:     SPLIT_RAW_BYTES=214
12/08/31 09:22:16 INFO mapred.JobClient:     Reduce input records=11
12/08/31 09:22:16 INFO mapred.JobClient:     Reduce input groups=10
12/08/31 09:22:16 INFO mapred.JobClient:     Combine output records=11
12/08/31 09:22:16 INFO mapred.JobClient:     Physical memory (bytes) snapshot=447000576
12/08/31 09:22:16 INFO mapred.JobClient:     Reduce output records=10
12/08/31 09:22:16 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=1634324480
12/08/31 09:22:16 INFO mapred.JobClient:     Map output records=11
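One thing to note before rerunning: Hadoop's FileOutputFormat refuses to start a job whose output directory already exists, so a second run against the same output path fails immediately. Remove the old directory first (-rmr is the recursive delete in Hadoop 1.x):

bin/hadoop dfs -rmr output           # delete the previous results, then rerun the job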
4. View the results:
[root@SC-026 hadoop-1.0.3]# bin/hadoop dfs -cat output/*
are     1
bye     1
bye,    1
hadoop, 1
hadoop. 1
hello   2
how     1
world,  1
world.  1
you?    1
cat: File does not exist: /user/root/output/_logs
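The trailing cat error is harmless: besides the result file, the job writes a _logs directory (and, depending on configuration, an empty _SUCCESS marker) into output, and cat cannot print a directory. Listing the directory and reading only the part file avoids the noise; for this example the result file is typically named part-r-00000, but check the -ls listing if yours differs:

bin/hadoop dfs -ls output                 # see exactly what the job wrote
bin/hadoop dfs -cat output/part-r-00000   # print only the result file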
Alternatively, copy the results from HDFS to the local filesystem and view them there:
[root@SC-026 hadoop-1.0.3]# bin/hadoop dfs -get output output
[root@SC-026 hadoop-1.0.3]# cat output/*
cat: output/_logs: Is a directory
are     1
bye     1
bye,    1
hadoop, 1
hadoop. 1
hello   2
how     1
world,  1
world.  1
you?    1
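The same _logs directory came along with -get, which is what the local cat complains about. Reading just the part file sidesteps it (again assuming the default part-r-00000 name):

cat output/part-r-00000              # print only the result file, skipping _logs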
Note: bin/hadoop dfs -help describes the usage of the various HDFS commands.