Summary: Translated from: https://stackoverflow.com/questions/33591393/making-spark-use-etc-hosts-file-for-binding-in-yarn-cluster-mode
I am setting up a Spark cluster on machines that each have two network interfaces, one public and one private. The /etc/hosts file on every machine in the cluster contains the internal IPs of all the other machines, like this:
internal_ip FQDN
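For illustration only (the question gives no concrete values, so the addresses and hostnames below are hypothetical), such entries might look like:

10.0.0.11  node1.cluster.internal
10.0.0.12  node2.cluster.internal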
However, when I request a SparkContext through pyspark in YARN client mode (pyspark --master yarn --deploy-mode client), Akka binds to the public IP and the connection times out:
15/11/07 23:29:23 INFO Remoting: Starting remoting
15/11/07 23:29:23 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkYarnAM@public_ip:44015]
15/11/07 23:29:23 INFO util.Utils: Successfully started service 'sparkYarnAM' on port 44015.
15/11/07 23:29:23 INFO yarn.ApplicationMaster: Waiting for Spark driver to be reachable.
15/11/07 23:31:30 ERROR yarn.ApplicationMaster: Failed to connect to driver at yarn_driver_public_ip:48875, retrying ...
15/11/07 23:31:30 ERROR yarn.ApplicationMaster: Uncaught exception: org.apache.spark.SparkException: Failed to connect to driver!
    at org.apache.spark.deploy.yarn.ApplicationMaster.waitForSparkDriver(ApplicationMaster.scala:427)
    at org.apache.spark.deploy.yarn.ApplicationMaster.runExecutorLauncher(ApplicationMaster.scala:293)
    at org.apache.spark.deploy.yarn.ApplicationMaster.run(ApplicationMaster.scala:149)
    at org.apache.spark.deploy.yarn.ApplicationMaster$$anonfun$main$1.apply$mcV$sp(ApplicationMaster.scala:574)
    at org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:66)
    at org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:65)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
    at org.apache.spark.deploy.SparkHadoopUtil.runAsSparkUser(SparkHadoopUtil.scala:65)
    at org.apache.spark.deploy.yarn.ApplicationMaster$.main(ApplicationMaster.scala:572)
    at org.apache.spark.deploy.yarn.ExecutorLauncher$.main(ApplicationMaster.scala:599)
    at org.apache.spark.deploy.yarn.ExecutorLauncher.main(ApplicationMaster.scala)
15/11/07 23:31:30 INFO yarn.ApplicationMaster: Final app status: FAILED, exitCode: 10, (reason: Uncaught exception: org.apache.spark.SparkException: Failed to connect to driver!)
15/11/07 23:31:30 INFO yarn.ApplicationMaster: Unregistering ApplicationMaster with FAILED (diag message: Uncaught exception: org.apache.spark.SparkException: Failed to connect to driver!)
15/11/07 23:31:30 INFO yarn.ApplicationMaster: Deleting staging directory .sparkStaging/application_1446960366742_0002
As the log shows, the private IP is ignored entirely. How can I make YARN and Spark use the private IP addresses specified in the hosts file?
The cluster was provisioned with Ambari (HDP 2.4).
1 Answer
Spark uses Akka for its communication, so this is really an Akka question rather than a Spark one.
If you need to bind your network interface to a different address, use the akka.remote.netty.tcp.bind-hostname and akka.remote.netty.tcp.bind-port settings.
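As a rough sketch of how those Akka settings could be supplied, one option is to pass them through pyspark/spark-submit with --conf; whether a given Spark 1.x release forwards akka.* keys from SparkConf to Akka is an assumption to verify, not something stated in the answer:

pyspark --master yarn --deploy-mode client \
  --conf akka.remote.netty.tcp.bind-hostname=internal_ip \
  --conf spark.driver.host=internal_fqdn

A related workaround, not part of the original answer, is to set the SPARK_LOCAL_IP environment variable (for example in conf/spark-env.sh) to the private address so that Spark itself binds to that interface.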