Hadoop installation: digging my own pits and filling them in

Copyright notice: this is an original post by the author; do not repost without the author's permission. https://blog.csdn.net/u012400305/article/details/74331492
Stack trace: ExitCodeException exitCode=1: 
	at org.apache.hadoop.util.Shell.runCommand(Shell.java:981)
	at org.apache.hadoop.util.Shell.run(Shell.java:884)
	at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1180)
	at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:293)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.launchContainer(ContainerLaunch.java:425)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:285)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:88)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:748)


Container exited with a non-zero exit code 1. Last 4096 bytes of stderr :
Error: Could not find or load main class org.apache.hadoop.mapreduce.v2.app.MRAppMaster

Container exited with a non-zero exit code 1. Last 4096 bytes of stderr :
Error: Could not find or load main class org.apache.hadoop.mapreduce.v2.app.MRAppMaster

For more detailed output, check the application tracking page: http://master:8088/cluster/app/application_1499152375407_0001 Then click on links to logs of each attempt.
. Failing the application.
After installing Hadoop 3.0, running a test MapReduce job can fail with the "could not find or load main class" error shown above. The cause is the yarn-site.xml configuration.
 
 
The usual fix is to add the following to yarn-site.xml:
<property>
 <name>yarn.application.classpath</name>
 <value>$HADOOP_CONF_DIR,$HADOOP_COMMON_HOME/*,$HADOOP_COMMON_HOME/lib/*,$HADOOP_HDFS_HOME/*,$HADOOP_HDFS_HOME/lib/*,$HADOOP_YARN_HOME/*,$HADOOP_YARN_HOME/lib/*,$HADOOP_MAPRED_HOME/*,$HADOOP_MAPRED_HOME/lib/*</value>
</property>
Usually this is enough. But when I defined these variables in /etc/profile instead, start-all.sh stopped working, because these variables point at the jar directories and do not include the sbin directory. So my workaround was to write the absolute paths directly into yarn-site.xml; once I did that, the cluster started and the problem was gone.
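For reference, here is a sketch of the property with the absolute paths written out. The install prefix /usr/local/hadoop-3.0.0 is only an assumption; substitute the actual installation directory (the jar directories on a node can be listed with the hadoop classpath command).

<property>
 <name>yarn.application.classpath</name>
 <value>/usr/local/hadoop-3.0.0/etc/hadoop,/usr/local/hadoop-3.0.0/share/hadoop/common/*,/usr/local/hadoop-3.0.0/share/hadoop/common/lib/*,/usr/local/hadoop-3.0.0/share/hadoop/hdfs/*,/usr/local/hadoop-3.0.0/share/hadoop/hdfs/lib/*,/usr/local/hadoop-3.0.0/share/hadoop/yarn/*,/usr/local/hadoop-3.0.0/share/hadoop/yarn/lib/*,/usr/local/hadoop-3.0.0/share/hadoop/mapreduce/*,/usr/local/hadoop-3.0.0/share/hadoop/mapreduce/lib/*</value>
</property>

Restart YARN after changing yarn-site.xml so the NodeManagers pick up the new classpath.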
 
 
Sometimes you also run into a permission problem when starting the daemons as the root user. Add the following to hadoop-env.sh:
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_JOURNALNODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
and the problem is solved.
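For context: these variables are read by the start scripts (start-dfs.sh, start-yarn.sh, and therefore start-all.sh) to decide which OS user each daemon runs as; without them, Hadoop 3 refuses to launch the daemons as root. A minimal way to apply the change, assuming the standard layout with the configuration under $HADOOP_HOME/etc/hadoop:

# hadoop-env.sh lives in the configuration directory of the installation
vi $HADOOP_HOME/etc/hadoop/hadoop-env.sh   # add the export lines above

# restart the cluster so the new settings take effect
$HADOOP_HOME/sbin/stop-all.sh
$HADOOP_HOME/sbin/start-all.sh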
 
 
 
 
 

On a worker node, running hdfs dfs -ls / sometimes fails because jars are missing from the classpath; running hadoop classpath on that node shows an incomplete path list. Fix: define the HADOOP_CLASSPATH variable, again by adding it in hadoop-env.sh on that node, as sketched below.
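A sketch of the check and fix on a worker node (the share/hadoop paths below assume the standard tarball layout; adjust to the actual install, comparing against the output of hadoop classpath on the master):

# print the classpath this node resolves; compare with a working node
hadoop classpath

# if directories are missing, append them in $HADOOP_HOME/etc/hadoop/hadoop-env.sh on this node
export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$HADOOP_HOME/share/hadoop/common/*:$HADOOP_HOME/share/hadoop/common/lib/*:$HADOOP_HOME/share/hadoop/hdfs/*:$HADOOP_HOME/share/hadoop/hdfs/lib/*

# re-run the command that failed
hdfs dfs -ls /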

 
 
 
