Summary: how to add a node to a running Hadoop cluster without restarting it — sync hostnames and hosts files, set up passwordless SSH, start the new daemons, and rebalance HDFS.
I previously wrote an article on Hadoop cluster installation and configuration that used only two machines. When those machines start to fill up, you need to add more. Adding a node to Hadoop does not require restarting the Hadoop services.
1. Preparing the new node
1. Set the new machine's hostname, and keep the hosts file consistent across all nodes
2. Set up passwordless SSH login
3. Update the slaves file, keeping it the same on all nodes
4. Use scp to copy the etc/hadoop configuration from an existing node to the new node
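The preparation steps above can be sketched as shell commands run as root (the hostnames, IP, and the /bigdata/hadoop install path follow the article; adjust them to your own layout — this is a sketch, not a drop-in script):

```shell
# 1. On the new machine: set its hostname
hostnamectl set-hostname bigserver3

# 2. On the master: add the new node to /etc/hosts and push the file to every node
echo "10.0.0.193 bigserver3" >> /etc/hosts
scp /etc/hosts root@bigserver2:/etc/hosts
scp /etc/hosts root@bigserver3:/etc/hosts

# 3. Passwordless SSH from the master to the new node
ssh-copy-id root@bigserver3

# 4. Add bigserver3 to slaves, sync it to the old node,
#    then copy the whole config directory to the new node
echo "bigserver3" >> /bigdata/hadoop/etc/hadoop/slaves
scp /bigdata/hadoop/etc/hadoop/slaves root@bigserver2:/bigdata/hadoop/etc/hadoop/
scp -r /bigdata/hadoop/etc/hadoop root@bigserver3:/bigdata/hadoop/etc/
```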
2. Starting the daemons on the new node
[root@bigserver3 hadoop]# ./sbin/hadoop-daemon.sh start datanode
[root@bigserver3 hadoop]# ./sbin/yarn-daemon.sh start nodemanager
[root@bigserver3 hadoop]# jps
1569 Jps
1401 DataNode
1499 NodeManager
[root@bigserver3 hadoop]# yarn node -list
18/12/27 22:35:41 INFO client.RMProxy: Connecting to ResourceManager at bigserver1/10.0.0.237:8032
Total Nodes:2
         Node-Id      Node-State  Node-Http-Address  Number-of-Running-Containers
bigserver3:40901         RUNNING    bigserver3:8042                             0
bigserver2:43959         RUNNING    bigserver2:8042                             0
3. Balancing HDFS storage
1. Set the environment variable
# echo "export PATH=/bigdata/hadoop/bin:$PATH" >> ~/.bashrc
# source ~/.bashrc
This lets you run the commands under hadoop/bin directly.
2. Configure the balancer bandwidth; the default is 1 MB/s
# hdfs dfsadmin -setBalancerBandwidth 52428800    // set 50 MB/s
Balancer bandwidth is set to 52428800
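The value passed to -setBalancerBandwidth is bytes per second, so 52428800 corresponds to 50 MB/s. A quick sanity check of the arithmetic:

```shell
# 50 MB/s expressed in bytes per second: 50 * 1024 * 1024
bw=$((50 * 1024 * 1024))
echo "$bw"
```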
3. Run the balancer
[root@bigserver3 hadoop]# ./sbin/start-balancer.sh -threshold 5
starting balancer, logging to /home/bigdata/hadoop/logs/hadoop-root-balancer-bigserver3.out
The default threshold is 10. A larger value makes balancing finish faster but leaves the cluster less evenly balanced; a smaller value takes longer but balances more thoroughly.
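The threshold works on each DataNode's DFS Used% relative to the cluster-wide average: a node counts as balanced once its deviation from the average is within the threshold, in percentage points. A minimal sketch of that check, using made-up utilization numbers (not taken from the report below):

```shell
# hypothetical values: cluster-wide DFS Used% and one node's DFS Used%
cluster_avg=45
node_used=52
threshold=5

# absolute deviation of the node from the cluster average
diff=$((node_used - cluster_avg))
if [ "$diff" -lt 0 ]; then
  diff=$((-diff))
fi

# within the threshold -> the balancer leaves this node alone
if [ "$diff" -le "$threshold" ]; then
  echo "balanced"
else
  echo "needs balancing"
fi
```

With these numbers the deviation is 7 points, above the threshold of 5, so the balancer would move blocks off this node.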
4. Check the result after balancing
[root@bigserver3 hadoop]# hadoop dfsadmin -report
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

Configured Capacity: 2382153052160 (2.17 TB)
Present Capacity: 2381362919833 (2.17 TB)
DFS Remaining: 2381087567872 (2.17 TB)
DFS Used: 275351961 (262.60 MB)
DFS Used%: 0.01%
Under replicated blocks: 20
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0

-------------------------------------------------
Live datanodes (2):

Name: 10.0.0.193:50010 (bigserver3)
Hostname: bigserver3
Decommission Status : Normal
Configured Capacity: 441499058176 (411.18 GB)
DFS Used: 3000729 (2.86 MB)
Non DFS Used: 392808039 (374.61 MB)
DFS Remaining: 441103249408 (410.81 GB)
DFS Used%: 0.00%          // these two fields show whether the node is balanced
DFS Remaining%: 99.91%    // these two fields show whether the node is balanced
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Thu Dec 27 22:38:51 EST 2018

Name: 10.0.0.236:50010 (bigserver2)
Hostname: bigserver2
Decommission Status : Normal
Configured Capacity: 1940653993984 (1.77 TB)
DFS Used: 272351232 (259.73 MB)
Non DFS Used: 397324288 (378.92 MB)
DFS Remaining: 1939984318464 (1.76 TB)
DFS Used%: 0.01%          // these two fields show whether the node is balanced
DFS Remaining%: 99.97%    // these two fields show whether the node is balanced
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Thu Dec 27 22:38:51 EST 2018
4. Testing a stop and start of the Hadoop cluster on the master
[root@bigserver1 sbing]# ./stop-all.sh
This script is Deprecated. Instead use stop-dfs.sh and stop-yarn.sh
Stopping namenodes on [bigserver1]
bigserver1: stopping namenode
bigserver2: stopping datanode
bigserver3: stopping datanode
Stopping secondary namenodes [0.0.0.0]
0.0.0.0: stopping secondarynamenode
stopping yarn daemons
stopping resourcemanager
bigserver2: stopping nodemanager
bigserver3: stopping nodemanager
no proxyserver to stop
[root@bigserver1 sbing]# ./start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [bigserver1]
bigserver1: starting namenode, logging to /home/bigdata/hadoop/logs/hadoop-root-namenode-bigserver1.out
bigserver2: starting datanode, logging to /home/bigdata/hadoop/logs/hadoop-root-datanode-bigserver2.out
bigserver3: starting datanode, logging to /home/bigdata/hadoop/logs/hadoop-root-datanode-bigserver3.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /home/bigdata/hadoop/logs/hadoop-root-secondarynamenode-bigserver1.out
starting yarn daemons
starting resourcemanager, logging to /home/bigdata/hadoop/logs/yarn-root-resourcemanager-bigserver1.out
bigserver2: starting nodemanager, logging to /home/bigdata/hadoop/logs/yarn-root-nodemanager-bigserver2.out
bigserver3: starting nodemanager, logging to /home/bigdata/hadoop/logs/yarn-root-nodemanager-bigserver3.out
This step is not required when adding a node; it was just my own test.