
Hadoop "failed on connection exception: java.net.ConnectException: Connection refused"

I am trying to run a Hadoop command in local mode. I am running Mac OS X 10.10.5, and I get an error when putting a file into HDFS. Here is the error message from my Hadoop command:

$ sudo hadoop fs -put HG00103.mapped.ILLUMINA.bwa.GBR.low_coverage.20120522.bam /usr/ds/genomics 
    Password: 
    15/09/25 10:10:50 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 
    put: Call From BlueMeanie/10.0.1.5 to localhost:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused 

Here are the details of my system:

$ java -version 
java version "1.8.0_05" 
Java(TM) SE Runtime Environment (build 1.8.0_05-b13) 
Java HotSpot(TM) 64-Bit Server VM (build 25.5-b02, mixed mode) 

$ hadoop version 
Hadoop 2.3.0 
Subversion http://svn.apache.org/repos/asf/hadoop/common -r 1567123 
Compiled by jenkins on 2014-02-11T13:40Z 
Compiled with protoc 2.5.0 
From source with checksum dfe46336fbc6a044bc124392ec06b85 
This command was run using /Users/davidlaxer/hadoop-2.3.0/share/hadoop/common/hadoop-common-2.3.0.jar 

$ cat /etc/hosts 
## 
# Host Database 
# 
# localhost is used to configure the loopback interface 
# when the system is booting. Do not change this entry. 
## 
127.0.0.1 localhost 
10.0.1.5 BlueMeanie 
255.255.255.255 broadcasthost 
::1 localhost 
fe80::1%lo0 localhost 

$ telnet 10.1.1.5 9000 
Trying 10.1.1.5... 
^C 
$ telnet localhost 9000 
Trying ::1... 
telnet: connect to address ::1: Connection refused 
Trying 127.0.0.1... 
telnet: connect to address 127.0.0.1: Connection refused 
Trying fe80::1... 
telnet: connect to address fe80::1: Connection refused 
telnet: Unable to connect to remote host 

$ env | grep HADOOP 
HADOOP_HOME=/Users/dbl/hadoop-2.3.0/ 
HADOOP_CONF_DIR=/Users/dbl/hadoop-2.3.0/etc 

$ cat core-site.xml 

<configuration> 
    <property> 
     <name>fs.defaultFS</name> 
     <value>hdfs://localhost:9000</value> 
    </property> 
</configuration> 

$ cat hdfs-site.xml 
<configuration> 
    <property> 
     <name>dfs.replication</name> 
     <value>1</value> 
    </property> 
</configuration> 
$ cat yarn-site.xml 
<configuration> 
    <property> 
     <name>yarn.nodemanager.aux-services</name> 
     <value>mapreduce_shuffle</value> 
    </property> 
</configuration> 
$ cat mapred-site.xml 
<configuration> 
    <property> 
     <name>mapreduce.framework.name</name> 
     <value>yarn</value> 
    </property> 
</configuration> 

$ sbin/start-dfs.sh 
Starting namenodes on [2015-09-25 16:36:54,540 WARN [main] util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 
localhost] 
[main]: ssh: Could not resolve hostname [main]: nodename nor servname provided, or not known 
-: ssh: Could not resolve hostname -: nodename nor servname provided, or not known 
Unable: ssh: Could not resolve hostname Unable: nodename nor servname provided, or not known 
native-hadoop: ssh: Could not resolve hostname native-hadoop: nodename nor servname provided, or not known 
load: ssh: Could not resolve hostname load: nodename nor servname provided, or not known 
to: ssh: Could not resolve hostname to: nodename nor servname provided, or not known 
for: ssh: Could not resolve hostname for: nodename nor servname provided, or not known 
16:36:54,540: ssh: Could not resolve hostname 16:36:54,540: nodename nor servname provided, or not known 
your: ssh: Could not resolve hostname your: nodename nor servname provided, or not known 
platform...: ssh: Could not resolve hostname platform...: nodename nor servname provided, or not known 
using: ssh: Could not resolve hostname using: nodename nor servname provided, or not known 
builtin-java: ssh: Could not resolve hostname builtin-java: nodename nor servname provided, or not known 
where: ssh: Could not resolve hostname where: nodename nor servname provided, or not known 
applicable: ssh: Could not resolve hostname applicable: nodename nor servname provided, or not known 
localhost: namenode running as process 99664. Stop it first. 
2015-09-25: ssh: Could not resolve hostname 2015-09-25: nodename nor servname provided, or not known 
WARN: ssh: Could not resolve hostname WARN: nodename nor servname provided, or not known 
library: ssh: Could not resolve hostname library: nodename nor servname provided, or not known 
classes: ssh: Could not resolve hostname classes: nodename nor servname provided, or not known 
(NativeCodeLoader.java:<clinit>(62)): ssh: connect to host (NativeCodeLoader.java:<clinit>(62)) port 22: Operation timed out 
util.NativeCodeLoader: ssh: connect to host util.NativeCodeLoader port 22: Operation timed out 
cat: /Users/davidlaxer/hadoop-2.3.0/etc/hadoop/conf/slaves: No such file or directory 
Starting secondary namenodes [2015-09-25 16:39:26,863 WARN [main] util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 
0.0.0.0] 
WARN: ssh: Could not resolve hostname WARN: nodename nor servname provided, or not known 
[main]: ssh: Could not resolve hostname [main]: nodename nor servname provided, or not known 
Unable: ssh: Could not resolve hostname Unable: nodename nor servname provided, or not known 
to: ssh: Could not resolve hostname to: nodename nor servname provided, or not known 
-: ssh: Could not resolve hostname -: nodename nor servname provided, or not known 
native-hadoop: ssh: Could not resolve hostname native-hadoop: nodename nor servname provided, or not known 
library: ssh: Could not resolve hostname library: nodename nor servname provided, or not known 
for: ssh: Could not resolve hostname for: nodename nor servname provided, or not known 
your: ssh: Could not resolve hostname your: nodename nor servname provided, or not known 
platform...: ssh: Could not resolve hostname platform...: nodename nor servname provided, or not known 
16:39:26,863: ssh: Could not resolve hostname 16:39:26,863: nodename nor servname provided, or not known 
using: ssh: Could not resolve hostname using: nodename nor servname provided, or not known 
builtin-java: ssh: Could not resolve hostname builtin-java: nodename nor servname provided, or not known 
classes: ssh: Could not resolve hostname classes: nodename nor servname provided, or not known 
where: ssh: Could not resolve hostname where: nodename nor servname provided, or not known 
applicable: ssh: Could not resolve hostname applicable: nodename nor servname provided, or not known 
0.0.0.0: secondarynamenode running as process 99006. Stop it first. 
2015-09-25: ssh: Could not resolve hostname 2015-09-25: nodename nor servname provided, or not known 
load: ssh: Could not resolve hostname load: nodename nor servname provided, or not known 
(NativeCodeLoader.java:<clinit>(62)): ssh: connect to host (NativeCodeLoader.java:<clinit>(62)) port 22: Operation timed out 
util.NativeCodeLoader: ssh: connect to host util.NativeCodeLoader port 22: Operation timed out 

Local mode, as in **single cluster** or **pseudo-distributed**? –


A single cluster. – dbl001

Answer


Well, running in single node mode does not require starting the NameNode, DataNode, et al.

Single Node, or Standalone, mode works out of the box with a standard Hadoop installation, with the requirement that you set fs.defaultFS to file:///, which means the local file system.
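For illustration, a minimal core-site.xml for standalone mode could look like the sketch below (note that file:/// is also the out-of-the-box default, so omitting the property entirely has the same effect):

$ cat core-site.xml
<configuration>
    <property>
     <name>fs.defaultFS</name>
     <value>file:///</value>
    </property>
</configuration>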


If you want to run in pseudo-distributed mode (as I assume you do, judging from your configuration and the fact that you ran start-dfs.sh), you should also remember that communication between the daemons happens over ssh, so you need to:

  • Edit your sshd_config file (after installing SSH and setting sshd_config up)
  • Add port 9000 (and, I believe, port 8020) to it.

Then restart ssh and check whether you can connect to localhost via ssh. That is probably what the strange messages you get when starting the NameNode and DataNode are about.
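A minimal sketch of setting up passwordless ssh to localhost (assuming Remote Login is enabled under System Preferences > Sharing on the Mac):

$ ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa     # generate a key with an empty passphrase
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ chmod 600 ~/.ssh/authorized_keys
$ ssh localhost                                # should now log in without a password prompt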


How do I fix the permission problem ... put: Permission denied: user=root, access=WRITE, inode="/user/dbl":dbl-xr-x – dbl001

+0

Take a look at [these](http://stackoverflow.com/questions/11593374/permission-denied-at-hdfs). This is a common problem with Hadoop, and loosening dfs.permissions is probably best for now. Unfortunately, I have not worked with Macs enough to know the specifics of their permissions and suggest a more robust solution (although, since I assume it is Unix-based, it is probably similar to what I know). –
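For illustration, the usual fixes in answers like the linked one are either to relax HDFS permission checking in hdfs-site.xml (development only; dfs.permissions.enabled is the Hadoop 2.x property name), or to give your user a home directory it owns. A sketch:

$ cat hdfs-site.xml
<configuration>
    <property>
     <name>dfs.permissions.enabled</name>
     <value>false</value>
    </property>
</configuration>

# or, as the user that started the NameNode (the HDFS superuser):
$ hadoop fs -mkdir -p /user/dbl
$ hadoop fs -chown dbl /user/dbl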


Any ideas about this warning? WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable. HADOOP_HOME=/Users/davidlaxer/hadoop-2.3.0 HADOOP_COMMON_LIB_NATIVE_DIR=/Users/davidlaxer/hadoop-2.3.0/lib/native HADOOP_CONF_DIR=/Users/davidlaxer/hadoop-2.3.0/etc/hadoop/conf HADOOP_OPTS=-Djava.library.path=/Users/davidlaxer/hadoop-2.3.0/lib – dbl001
