References
- https://blog.csdn.net/u012731379/article/details/79856113
- https://blog.csdn.net/u010466329/article/details/79209236
- https://blog.csdn.net/laoyang360/article/details/65449407
Migration methods
- Via Logstash input and output configuration (flexible to configure; suited to long-term, ongoing data synchronization)
- Via a migration tool such as elasticdump (suited to one-off backups of small amounts of data)
- Via Elasticsearch's built-in snapshot feature (suited to one-off migration of large amounts of data; see the sketch after this list)
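For completeness, a minimal sketch of the snapshot approach. The repository name my_backup, the snapshot name snapshot_1, and the path /mnt/es_backup are only examples; the path must be listed under path.repo in elasticsearch.yml on every node, and the same repository must be reachable from the new cluster before restoring.

# Register a shared-filesystem snapshot repository on the source cluster (example name and path)
curl -XPUT 'http://10.2.3.159:9200/_snapshot/my_backup' -H 'Content-Type: application/json' -d '{"type": "fs", "settings": {"location": "/mnt/es_backup"}}'
# Take a snapshot and wait for it to finish
curl -XPUT 'http://10.2.3.159:9200/_snapshot/my_backup/snapshot_1?wait_for_completion=true'
# After registering the same repository on the new cluster, restore the snapshot there
curl -XPOST 'http://10.2.100.24:9200/_snapshot/my_backup/snapshot_1/_restore'

The rest of this post uses the elasticdump approach, which needs no shared storage between the two clusters.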
Steps
1. Install elasticdump

Install Node.js and npm from the system repositories, then install elasticdump with npm:
[root@VM_8_24_centos ~]# yum install nodejs npm
[root@VM_8_24_centos ~]# npm install elasticdump
/root
└─┬ elasticdump@4.4.0
├── async@2.6.1
├─┬ aws-sdk@2.400.0
│ ├─┬ buffer@4.9.1
│ │ ├── base64-js@1.3.0
│ │ └── isarray@1.0.0
│ ├── events@1.1.1
│ ├── ieee754@1.1.8
│ ├── jmespath@0.15.0
│ ├── querystring@0.2.0
│ ├── sax@1.2.1
│ ├─┬ url@0.10.3
│ │ └── punycode@1.3.2
│ ├── uuid@3.3.2
│ └─┬ xml2js@0.4.19
│   └── xmlbuilder@9.0.7
├── aws4@1.8.0
├── bytes@3.1.0
├── decimal.js@10.0.2
├── ini@1.3.5
├─┬ JSONStream@1.3.5
│ ├── jsonparse@1.3.1
│ └── through@2.3.8
├── lodash@4.17.11
├── lossless-json@1.0.3
├─┬ optimist@0.6.1
│ ├── minimist@0.0.10
│ └── wordwrap@0.0.3
├─┬ request@2.88.0
│ ├── aws-sign2@0.7.0
│ ├── caseless@0.12.0
│ ├─┬ combined-stream@1.0.7
│ │ └── delayed-stream@1.0.0
│ ├── extend@3.0.2
│ ├── forever-agent@0.6.1
│ ├─┬ form-data@2.3.3
│ │ └── asynckit@0.4.0
│ ├─┬ har-validator@5.1.3
│ │ ├─┬ ajv@6.9.1
│ │ │ ├── fast-deep-equal@2.0.1
│ │ │ ├── fast-json-stable-stringify@2.0.0
│ │ │ ├── json-schema-traverse@0.4.1
│ │ │ └─┬ uri-js@4.2.2
│ │ │   └── punycode@2.1.1
│ │ └── har-schema@2.0.0
│ ├─┬ http-signature@1.2.0
│ │ ├── assert-plus@1.0.0
│ │ ├─┬ jsprim@1.4.1
│ │ │ ├── extsprintf@1.3.0
│ │ │ ├── json-schema@0.2.3
│ │ │ └── verror@1.10.0
│ │ └─┬ sshpk@1.16.1
│ │   ├── asn1@0.2.4
│ │   ├── bcrypt-pbkdf@1.0.2
│ │   ├── dashdash@1.14.1
│ │   ├── ecc-jsbn@0.1.2
│ │   ├── getpass@0.1.7
│ │   ├── jsbn@0.1.1
│ │   ├── safer-buffer@2.1.2
│ │   └── tweetnacl@0.14.5
│ ├── is-typedarray@1.0.0
│ ├── isstream@0.1.2
│ ├── json-stringify-safe@5.0.1
│ ├─┬ mime-types@2.1.21
│ │ └── mime-db@1.37.0
│ ├── oauth-sign@0.9.0
│ ├── performance-now@2.1.0
│ ├── qs@6.5.2
│ ├── safe-buffer@5.1.2
│ ├─┬ tough-cookie@2.4.3
│ │ ├── psl@1.1.31
│ │ └── punycode@1.4.1
│ └── tunnel-agent@0.6.0
├─┬ requestretry@3.1.0
│ └── when@3.7.8
└─┬ s3-stream-upload@2.0.2
  ├── buffer-queue@1.0.0
  └─┬ readable-stream@2.3.6
    ├── core-util-is@1.0.2
    ├── inherits@2.0.3
    ├── process-nextick-args@2.0.0
    ├── string_decoder@1.1.1
    └── util-deprecate@1.0.2
npm WARN enoent ENOENT: no such file or directory, open '/root/package.json'
npm WARN root No description
npm WARN root No repository field.
npm WARN root No README data
npm WARN root No license field.
[root@VM_8_24_centos ~]# cd node_modules/elasticdump/bin
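The npm WARN lines only mean there is no package.json in /root; they do not affect the install. As a minimal alternative (assuming the same nodejs/npm packages), elasticdump can be installed globally so the binary is on $PATH and there is no need to cd into node_modules/elasticdump/bin:

# Global install puts the elasticdump binary on $PATH
npm install -g elasticdump
# Quick sanity check that the tool runs
elasticdump --help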
2. Export the mapping

Copy the index mapping from the source index reconciliation to the destination index reconciliationonline, then confirm that the destination index has been created:
[root@VM_8_24_centos bin]# ./elasticdump --input=http://10.2.3.159:9200/reconciliation --output=http://10.2.100.24:9200/reconciliationonline --type=mapping
Mon, 11 Feb 2019 09:28:10 GMT | starting dump
Mon, 11 Feb 2019 09:28:10 GMT | got 1 objects from source elasticsearch (offset: 0)
Mon, 11 Feb 2019 09:28:16 GMT | sent 1 objects to destination elasticsearch, wrote 1
Mon, 11 Feb 2019 09:28:16 GMT | got 0 objects from source elasticsearch (offset: 1)
Mon, 11 Feb 2019 09:28:16 GMT | Total Writes: 1
Mon, 11 Feb 2019 09:28:16 GMT | dump complete
[root@VM_8_24_centos bin]# curl 10.2.100.24:9200/_cat/indices|grep reconciliation
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
 43 32578   43 14037    0     0   5694      0  0:00:05  0:00:02  0:00:03  5694yellow open reconciliationonline KnZrsU-7TmWWBs-1-dBRrw 5 1 0 0 1.1kb 1.1kb
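To double-check what actually landed on the destination, the mapping can be queried directly. Depending on the elasticdump version, index analyzers can be copied the same way with a different --type value; treat the analyzer command below as an assumption and verify the supported types with ./elasticdump --help.

# Inspect the mapping created on the destination index
curl 'http://10.2.100.24:9200/reconciliationonline/_mapping?pretty'
# Optionally copy the analysis settings as well (verify --type=analyzer is supported by your version)
./elasticdump --input=http://10.2.3.159:9200/reconciliation --output=http://10.2.100.24:9200/reconciliationonline --type=analyzer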
3. Export the data

Copy the documents with --type=data, then verify the document count on the destination index (807 documents in this case):
[root@VM_8_24_centos bin]# ./elasticdump --input=http://10.2.3.159:9200/reconciliation --output=http://10.2.100.24:9200/reconciliationonline --type=data
Mon, 11 Feb 2019 09:28:58 GMT | starting dump
Mon, 11 Feb 2019 09:28:58 GMT | got 100 objects from source elasticsearch (offset: 0)
Mon, 11 Feb 2019 09:29:19 GMT | sent 100 objects to destination elasticsearch, wrote 100
Mon, 11 Feb 2019 09:29:19 GMT | got 100 objects from source elasticsearch (offset: 100)
Mon, 11 Feb 2019 09:29:51 GMT | sent 100 objects to destination elasticsearch, wrote 100
Mon, 11 Feb 2019 09:29:51 GMT | got 100 objects from source elasticsearch (offset: 200)
Mon, 11 Feb 2019 09:30:13 GMT | sent 100 objects to destination elasticsearch, wrote 100
Mon, 11 Feb 2019 09:30:13 GMT | got 100 objects from source elasticsearch (offset: 300)
Mon, 11 Feb 2019 09:30:39 GMT | sent 100 objects to destination elasticsearch, wrote 100
Mon, 11 Feb 2019 09:30:39 GMT | got 100 objects from source elasticsearch (offset: 400)
Mon, 11 Feb 2019 09:31:53 GMT | sent 100 objects to destination elasticsearch, wrote 100
Mon, 11 Feb 2019 09:31:53 GMT | got 100 objects from source elasticsearch (offset: 500)
Mon, 11 Feb 2019 09:32:28 GMT | sent 100 objects to destination elasticsearch, wrote 100
Mon, 11 Feb 2019 09:32:28 GMT | got 100 objects from source elasticsearch (offset: 600)
Mon, 11 Feb 2019 09:33:24 GMT | sent 100 objects to destination elasticsearch, wrote 100
Mon, 11 Feb 2019 09:33:24 GMT | got 100 objects from source elasticsearch (offset: 700)
Mon, 11 Feb 2019 09:34:19 GMT | sent 100 objects to destination elasticsearch, wrote 100
Mon, 11 Feb 2019 09:34:19 GMT | got 7 objects from source elasticsearch (offset: 800)
Mon, 11 Feb 2019 09:34:47 GMT | sent 7 objects to destination elasticsearch, wrote 7
Mon, 11 Feb 2019 09:34:47 GMT | got 0 objects from source elasticsearch (offset: 807)
Mon, 11 Feb 2019 09:34:47 GMT | Total Writes: 807
Mon, 11 Feb 2019 09:34:47 GMT | dump complete
[root@VM_8_24_centos bin]# curl 10.2.100.24:9200/_cat/indices|grep reconciliationonline
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:--  0:00:16 --:--:--     0yellow open reconciliationonline KnZrsU-7TmWWBs-1-dBRrw 5 1 807 0 4.3mb 4.3mb
100 32578  100 32578    0     0   2010      0  0:00:16  0:00:16 --:--:--  7769
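The transfer above moves 100 documents per batch, which is elasticdump's default. Two small variations that are often useful, shown as a sketch (the file path /data/reconciliation_data.json and the --limit value are only examples): dump to a local JSON file as an intermediate backup, and raise the batch size with --limit to speed up larger indices.

# Dump the source index to a local file, 1000 documents per batch
./elasticdump --input=http://10.2.3.159:9200/reconciliation --output=/data/reconciliation_data.json --type=data --limit=1000
# Restore the file into the destination index
./elasticdump --input=/data/reconciliation_data.json --output=http://10.2.100.24:9200/reconciliationonline --type=data --limit=1000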