SCDF YARN connection error

I have deployed Spring Cloud Data Flow (SCDF) on a virtual YARN cluster. Starting the server with ./bin/dataflow-server-yarn works correctly and ends with:

2016-11-02 10:31:59.786 INFO 42493 --- [   main] o.s.i.endpoint.EventDrivenConsumer  : Adding {logging-channel-adapter:_org.springframework.integration.errorLogger} as a subscriber to the 'errorChannel' channel 
2016-11-02 10:31:59.787 INFO 42493 --- [   main] o.s.i.channel.PublishSubscribeChannel : Channel 'spring-cloud-dataflow-server-yarn:9393.errorChannel' has 1 subscriber(s). 
2016-11-02 10:31:59.787 INFO 42493 --- [   main] o.s.i.endpoint.EventDrivenConsumer  : started _org.springframework.integration.errorLogger 
2016-11-02 10:31:59.896 INFO 42493 --- [   main] s.b.c.e.t.TomcatEmbeddedServletContainer : Tomcat started on port(s): 9393 (http) 
2016-11-02 10:31:59.901 INFO 42493 --- [   main] o.s.c.d.server.yarn.YarnDataFlowServer : Started YarnDataFlowServer in 16.026 seconds (JVM running for 16.485) 

Then I can start ./bin/dataflow-shell; from there I can import applications and create and list streams without errors. However, when I try to deploy the stream I created, the deployment fails with the connection error shown after the shell sketch below:
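A sketch of such a shell session, based on the standard ticktock sample (the app-import URI is a placeholder; the stream name and the time source match the entries in the log below):

    dataflow:>app import --uri <kafka-stream-app-starters-uri>
    dataflow:>stream create --name ticktock --definition "time | log"
    dataflow:>stream list
    dataflow:>stream deploy --name ticktock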

2016-11-02 10:52:58.275 INFO 42788 --- [nio-9393-exec-8] o.s.c.deployer.spi.yarn.YarnAppDeployer : Deploy request for o…@23d59aea 
2016-11-02 10:52:58.275 INFO 42788 --- [nio-9393-exec-8] o.s.c.deployer.spi.yarn.YarnAppDeployer : Deploying request for definition […@ name = 'time', properties = map['spring.cloud.stream.bindings.output.producer.requiredGroups' -> 'ticktock', 'spring.cloud.stream.bindings.output.destination' -> 'ticktock.time']] 
2016-11-02 10:52:58.275 INFO 42788 --- [nio-9393-exec-8] o.s.c.deployer.spi.yarn.YarnAppDeployer : Parameters for definition {spring.cloud.stream.bindings.output.producer.requiredGroups=ticktock, spring.cloud.stream.bindings.output.destination=ticktock.time} 
2016-11-02 10:52:58.275 INFO 42788 --- [nio-9393-exec-8] o.s.c.deployer.spi.yarn.YarnAppDeployer : Deployment properties for request {spring.cloud.deployer.group=ticktock} 
2016-11-02 10:52:58.276 INFO 42788 --- [rTaskExecutor-1] o.s.s.support.LifecycleObjectSupport  : started org.s…@6d68eeb7 
2016-11-02 10:52:58.276 INFO 42788 --- [rTaskExecutor-1] o.s.s.support.LifecycleObjectSupport  : started RESOLVEINSTANCE WAITINSTANCE STARTCLUSTER CREATECLUSTER PUSHARTIFACT STARTINSTANCE CHECKINSTANCE PUSHAPP CHECKAPP WAITCHOICE STARTINSTANCECHOICE PUSHAPPCHOICE// uuid=d7e5224f-c5f0-47c9-b2c2-066b117cc786/id=null 
2016-11-02 10:52:58.276 INFO 42788 --- [rTaskExecutor-1] o.s.c.d.s.y.DefaultYarnCloudAppService : Cachekey STREAMnull found YarnCloudAppServiceApplication org.springf…@79158163 
2016-11-02 10:52:58.280 INFO 42788 --- [rTaskExecutor-1] o.s.c.d.s.y.DefaultYarnCloudAppService : Cachekey STREAMnull found YarnCloudAppServiceApplication org.springf…@79158163 
2016-11-02 10:52:59.281 INFO 42788 --- [rTaskExecutor-1] org.apache.hadoop.ipc.Client    : Retrying connect to server: localhost/192.168.137.135:8032. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS) 
2016-11-02 10:53:00.282 INFO 42788 --- [rTaskExecutor-1] org.apache.hadoop.ipc.Client    : Retrying connect to server: localhost/192.168.137.135:8032. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS) 
2016-11-02 10:53:01.283 INFO 42788 --- [rTaskExecutor-1] org.apache.hadoop.ipc.Client    : Retrying connect to server: localhost/192.168.137.135:8032. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS) 
2016-11-02 10:53:02.283 INFO 42788 --- [rTaskExecutor-1] org.apache.hadoop.ipc.Client    : Retrying connect to server: localhost/192.168.137.135:8032. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS) 
2016-11-02 10:53:03.284 INFO 42788 --- [rTaskExecutor-1] org.apache.hadoop.ipc.Client    : Retrying connect to server: localhost/192.168.137.135:8032. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS) 
2016-11-02 10:53:04.285 INFO 42788 --- [rTaskExecutor-1] org.apache.hadoop.ipc.Client    : Retrying connect to server: localhost/192.168.137.135:8032. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS) 
2016-11-02 10:53:05.286 INFO 42788 --- [rTaskExecutor-1] org.apache.hadoop.ipc.Client    : Retrying connect to server: localhost/192.168.137.135:8032. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS) 
2016-11-02 10:53:06.287 INFO 42788 --- [rTaskExecutor-1] org.apache.hadoop.ipc.Client    : Retrying connect to server: localhost/192.168.137.135:8032. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS) 
2016-11-02 10:53:07.288 INFO 42788 --- [rTaskExecutor-1] org.apache.hadoop.ipc.Client    : Retrying connect to server: localhost/192.168.137.135:8032. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS) 
2016-11-02 10:53:08.289 INFO 42788 --- [rTaskExecutor-1] org.apache.hadoop.ipc.Client    : Retrying connect to server: localhost/192.168.137.135:8032. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS) 
2016-11-02 10:53:08.290 INFO 42788 --- [rTaskExecutor-1] o.s.c.d.s.y.DefaultYarnCloudAppService : Cachekey STREAMapp--spring.yarn.appName=scdstream:app:ticktock,--spring.yarn.client.launchcontext.arguments.--spring.cloud.deployer.yarn.appmaster.artifact=/dataflow//artifacts/cache/ found YarnCloudAppServiceApplication org.springf…@7d3bc3f0 
2016-11-02 10:53:09.294 INFO 42788 --- [rTaskExecutor-1] org.apache.hadoop.ipc.Client    : Retrying connect to server: localhost/192.168.137.135:8032. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS) 
2016-11-02 10:53:10.295 INFO 42788 --- [rTaskExecutor-1] org.apache.hadoop.ipc.Client    : Retrying connect to server: localhost/192.168.137.135:8032. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS) 
2016-11-02 10:53:11.296 INFO 42788 --- [rTaskExecutor-1] org.apache.hadoop.ipc.Client    : Retrying connect to server: localhost/192.168.137.135:8032. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS) 
2016-11-02 10:53:12.297 INFO 42788 --- [rTaskExecutor-1] org.apache.hadoop.ipc.Client    : Retrying connect to server: localhost/192.168.137.135:8032. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS) 
2016-11-02 10:53:13.297 INFO 42788 --- [rTaskExecutor-1] org.apache.hadoop.ipc.Client    : Retrying connect to server: localhost/192.168.137.135:8032. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS) 
2016-11-02 10:53:14.298 INFO 42788 --- [rTaskExecutor-1] org.apache.hadoop.ipc.Client    : Retrying connect to server: localhost/192.168.137.135:8032. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS) 
2016-11-02 10:53:15.299 INFO 42788 --- [rTaskExecutor-1] org.apache.hadoop.ipc.Client    : Retrying connect to server: localhost/192.168.137.135:8032. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS) 
2016-11-02 10:53:16.300 INFO 42788 --- [rTaskExecutor-1] org.apache.hadoop.ipc.Client    : Retrying connect to server: localhost/192.168.137.135:8032. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS) 
2016-11-02 10:53:17.301 INFO 42788 --- [rTaskExecutor-1] org.apache.hadoop.ipc.Client    : Retrying connect to server: localhost/192.168.137.135:8032. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS) 
2016-11-02 10:53:18.302 INFO 42788 --- [rTaskExecutor-1] org.apache.hadoop.ipc.Client    : Retrying connect to server: localhost/192.168.137.135:8032. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS) 
2016-11-02 10:53:18.361 INFO 42788 --- [rTaskExecutor-1] org.apache.hadoop.fs.TrashPolicyDefault : Namenode trash configuration: Deletion interval = 0 minutes, Emptier interval = 0 minutes. 
2016-11-02 10:53:18.747 INFO 42788 --- [rTaskExecutor-1] o.s.s.support.LifecycleObjectSupport  : stopped org.s…@6d68eeb7 
2016-11-02 10:53:18.747 INFO 42788 --- [rTaskExecutor-1] o.s.s.support.LifecycleObjectSupport  : stopped RESOLVEINSTANCE WAITINSTANCE STARTCLUSTER CREATECLUSTER PUSHARTIFACT STARTINSTANCE CHECKINSTANCE PUSHAPP CHECKAPP WAITCHOICE STARTINSTANCECHOICE PUSHAPPCHOICE// uuid=d7e5224f-c5f0-47c9-b2c2-066b117cc786/id=null 
2016-11-02 10:53:18.747 ERROR 42788 --- [rTaskExecutor-1] o.s.c.d.s.y.AbstractDeployerStateMachine : Passing through error state DefaultStateContext [stage=STATE_ENTRY, message=GenericMessage [payload=DEPLOY, headers={artifact=org.springframework.cloud.stream.app:time-source-kafka:jar:1.0.0.BUILD-SNAPSHOT, appVersion=app, groupId=ticktock, definitionParameters={spring.cloud.stream.bindings.output.producer.requiredGroups=ticktock, spring.cloud.stream.bindings.output.destination=ticktock.time}, count=1, clusterId=ticktock:time, id=f0f7b54e-e59d-69c0-a405-f0907fa46343, contextRunArgs=[--spring.yarn.appName=scdstream:app:ticktock, --spring.yarn.client.launchcontext.arguments.--spring.cloud.deployer.yarn.appmaster.artifact=/dataflow//artifacts/cache/], artifactDir=/dataflow//artifacts/cache/, timestamp=1478083978275}], messageHeaders={artifact=org.springframework.cloud.stream.app:time-source-kafka:jar:1.0.0.BUILD-SNAPSHOT, appVersion=app, groupId=ticktock, definitionParameters={spring.cloud.stream.bindings.output.producer.requiredGroups=ticktock, spring.cloud.stream.bindings.output.destination=ticktock.time}, count=1, clusterId=ticktock:time, id=f0f7b54e-e59d-69c0-a405-f0907fa46343, contextRunArgs=[--spring.yarn.appName=scdstream:app:ticktock, --spring.yarn.client.launchcontext.arguments.--spring.cloud.deployer.yarn.appmaster.artifact=/dataflow//artifacts/cache/], artifactDir=/dataflow//artifacts/cache/, timestamp=1478083978275}, extendedState=DefaultExtendedState [variables={artifact=org.springframework.cloud.stream.app:time-source-kafka:jar:1.0.0.BUILD-SNAPSHOT, appVersion=app, appname=scdstream:app:ticktock, definitionParameters={spring.cloud.stream.bindings.output.producer.requiredGroups=ticktock, spring.cloud.stream.bindings.output.destination=ticktock.time}, count=1, messageId=f0f7b54e-e59d-69c0-a405-f0907fa46343, clusterId=ticktock:time, error=org.springframework.yarn.YarnSystemException: Call From master/127.0.1.1 to localhost:8032 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused; nested exception is java.net.ConnectException: Call From master/127.0.1.1 to localhost:8032 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused}], transition=org.sp…@77c2bbfc, stateMachine=UNDEPLOYMODULE DESTROYCLUSTER STOPCLUSTER DEPLOYMODULE RESOLVEINSTANCE WAITINSTANCE STARTCLUSTER CREATECLUSTER PUSHARTIFACT STARTINSTANCE CHECKINSTANCE PUSHAPP CHECKAPP WAITCHOICE STARTINSTANCECHOICE PUSHAPPCHOICE ERROR READY ERROR_JUNCTION UNDEPLOYEXIT DEPLOYEXIT/ERROR/uuid=436b73fe-991a-4c22-a418-334f895d41e5/id=null, source=null, target=null, sources=null, targets=null, exception=null] 
2016-11-02 10:53:18.748 WARN 42788 --- [nio-9393-exec-8] o.s.c.d.s.c.StreamDeploymentController : Exception when deploying the app StreamAppDefinition [streamName=ticktock, name=time, registeredAppName=time, properties={spring.cloud.stream.bindings.output.producer.requiredGroups=ticktock, spring.cloud.stream.bindings.output.destination=ticktock.time}]: java.util.concurrent.ExecutionException: org.springframework.yarn.YarnSystemException: Call From master/127.0.1.1 to localhost:8032 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused; nested exception is java.net.ConnectException: Call From master/127.0.1.1 to localhost:8032 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused 

Changing the IP address for the local server gave the same result. Here is my server.yml:

    logging:
      level:
        org.apache.hadoop: INFO
        org.springframework.yarn: INFO
    maven:
      remoteRepositories:
        springRepo:
          url: https://repo.spring.io/libs-snapshot
    spring:
      main:
        show_banner: false
      # Configured for Hadoop single-node running on localhost. Replace with property values reflecting your
      # actual Hadoop cluster when running in a distributed environment.
      hadoop:
        fsUri: hdfs://192.168.137.135:8020
        resourceManagerHost: 192.168.137.135
        resourceManagerPort: 8032
        resourceManagerSchedulerAddress: 192.168.137.135:8030
      # Configured for Redis running on localhost. Replace at least host property when running in a
      # distributed environment.
      redis:
        port: 6379
        host: 192.168.137.135
      # Configured for an embedded in-memory H2 database. Replace the datasource configuration with properties
      # matching your preferred database to be used instead, if needed, or when running in a distributed environment.
      #rabbitmq:
      #  addresses: localhost:5672
      # for default embedded database
      datasource:
        url: jdbc:h2:tcp://localhost:19092/mem:dataflow
        username: sa
        password:
        driverClassName: org.h2.Driver
      # for mysql/mariadb datasource
      #datasource:
      #  url: jdbc:mysql://localhost:3306/yourDB
      #  username: yourUsername
      #  password: yourPassword
      #  driverClassName: org.mariadb.jdbc.Driver
      # for postgresql datasource
      #datasource:
      #  url: jdbc:postgresql://localhost:5432/yourDB
      #  username: yourUsername
      #  password: yourPassword
      #  driverClassName: org.postgresql.Driver
      cloud:
        stream:
          kafka:
            binder:
              brokers: localhost:9093
              zkNodes: localhost:2181
        config:
          enabled: false
          server:
            bootstrap: true
        deployer:
          yarn:
            app:
              baseDir: /dataflow
              streamappmaster:
                memory: 512m
                virtualCores: 1
                javaOpts: "-Xms512m -Xmx512m"
              streamcontainer:
                priority: 5
                memory: 256m
                virtualCores: 1
                javaOpts: "-Xms64m -Xmx256m"
              taskappmaster:
                memory: 512m
                virtualCores: 1
                javaOpts: "-Xms512m -Xmx512m"
              taskcontainer:
                priority: 10
                memory: 256m
                virtualCores: 1
                javaOpts: "-Xms64m -Xmx256m"
      # yarn:
      #   hostdiscovery:
      #     pointToPoint: false
      #     loopback: false
      #     preferInterface: ['eth', 'en']
      #     matchIpv4: 192.168.0.0/24
      #     matchInterface: eth\\d*
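The Hadoop client in the log above keeps retrying localhost/192.168.137.135:8032, and the final exception reports a refused connection to localhost:8032, so the spring.hadoop.resourceManagerHost/resourceManagerPort values have to match the address the ResourceManager actually listens on. On the Hadoop side that address is governed by the standard yarn.resourcemanager.address property in yarn-site.xml; a sketch mirroring the values above (not taken from the actual cluster) would be:

    <!-- yarn-site.xml: ResourceManager client address (sketch mirroring server.yml above) -->
    <property>
      <name>yarn.resourcemanager.address</name>
      <value>192.168.137.135:8032</value>
    </property>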

Answer


I found a workaround; it seems the port was the problem. I changed the port number in yarn-site.xml, made the same change in server.yml, and voilà!
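A minimal sketch of that kind of change, assuming a hypothetical replacement port 8050 (the answer does not say which port was chosen); the only requirement is that yarn-site.xml and server.yml agree:

    <!-- yarn-site.xml: address/port the ResourceManager listens on (8050 is hypothetical) -->
    <property>
      <name>yarn.resourcemanager.address</name>
      <value>192.168.137.135:8050</value>
    </property>

    # server.yml: point the dataflow server at the same address/port
    spring:
      hadoop:
        resourceManagerHost: 192.168.137.135
        resourceManagerPort: 8050   # must match yarn.resourcemanager.address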
