sparklyr + rsparkling: error when connecting to the cluster

For some time I have been using the sparklyr package to connect to my COMPANY's Hadoop cluster with this code:
library(sparklyr)

Sys.setenv(SPARK_HOME = "/opt/spark/")
Sys.setenv(HADOOP_CONF_DIR = "/etc/hadoop/conf.cloudera.yarn")
Sys.setenv(JAVA_HOME = "/usr/lib/jvm/jre")

system('kinit -k -t user.keytab [email protected]')

sc <- spark_connect(master = "yarn",
                    config = list(
                      default = list(
                        spark.submit.deployMode = "client",
                        spark.yarn.keytab = "user.keytab",
                        spark.yarn.principal = "[email protected]",
                        spark.executor.instances = 20,
                        spark.executor.memory = "4G",
                        spark.executor.cores = 4,
                        spark.driver.memory = "8G")))
and everything works fine. But when I try to add the rsparkling package, using similar code:
library(h2o)
library(rsparkling)
library(sparklyr)

options(rsparkling.sparklingwater.version = '2.0')

Sys.setenv(SPARK_HOME = "/opt/spark/")
Sys.setenv(HADOOP_CONF_DIR = "/etc/hadoop/conf.cloudera.yarn")
Sys.setenv(JAVA_HOME = "/usr/lib/jvm/jre")

system('kinit -k -t user.keytab [email protected]')

sc <- spark_connect(master = "yarn",
                    config = list(
                      default = list(
                        spark.submit.deployMode = "client",
                        spark.yarn.keytab = "user.keytab",
                        spark.yarn.principal = "[email protected]",
                        spark.executor.instances = 20,
                        spark.executor.memory = "4G",
                        spark.executor.cores = 4,
                        spark.driver.memory = "8G")))
I get the following error:
Error in force(code) :
  Failed while connecting to sparklyr to port (8880) for sessionid (9819):
  Sparklyr gateway did not respond while retrieving ports information after 60 seconds
    Path: /opt/spark-2.0.0-bin-hadoop2.6/bin/spark-submit
    Parameters: --class, sparklyr.Backend, --packages,
      'ai.h2o:sparkling-water-core_2.11:2.0','ai.h2o:sparkling-water-ml_2.11:2.0','ai.h2o:sparkling-water-repl_2.11:2.0',
      '/usr/lib64/R/library/sparklyr/java/sparklyr-2.0-2.11.jar', 8880, 9819

---- Output Log ----
Ivy Default Cache set to: /opt/users/user/.ivy2/cache
The jars for the packages stored in: /opt/users/user/.ivy2/jars
:: loading settings :: url = jar:file:/opt/spark-2.0.0-bin-hadoop2.6/jars/ivy-2.4.0.jar!/org/apache/ivy/core/settings/ivysettings.xml
ai.h2o#sparkling-water-core_2.11 added as a dependency
ai.h2o#sparkling-water-ml_2.11 added as a dependency
ai.h2o#sparkling-water-repl_2.11 added as a dependency
:: resolving dependencies :: org.apache.spark#spark-submit-parent;1.0
    confs: [default]

---- Error Log ----
In addition: Warning messages:
1: In if (nchar(config[[e]]) == 0) found <- FALSE :
  the condition has length > 1 and only the first element will be used
2: In if (nchar(config[[e]]) == 0) found <- FALSE :
  the condition has length > 1 and only the first element will be used
I am new to Spark and clusters and not entirely sure what to do next. Any help would be much appreciated. My first thought was that the Sparkling Water jar files are missing on the cluster side; am I right?
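One thing I notice in the log is that the --packages coordinates end in ':2.0', and Sparkling Water releases seem to use three-part version numbers. So perhaps the option needs a full version string; something like the sketch below (the exact version "2.0.3" is just my assumption — it would have to be a 2.0.x release matching the cluster's Spark 2.0.0):

```r
library(rsparkling)

# Guess: use a full three-part Sparkling Water version so Ivy can resolve
# the artifact (e.g. ai.h2o:sparkling-water-core_2.11:2.0.3).
# "2.0.3" is an assumed example, not a version I have verified against
# this cluster.
options(rsparkling.sparklingwater.version = "2.0.3")
```

Is that the right direction, or is the problem really missing jars on the cluster side?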