Tech Share | OpenShift Networking: SDN



In Red Hat's container platform OpenShift, the default network is a native SDN solution: an SDN plugin developed specifically for OpenShift that conforms to the CNI standard and uses the popular OVS (Open vSwitch) as its virtual switch.

Overview

A basic SDN system generally consists of three parts: a management plane, a control plane, and a data (forwarding) plane. Loosely speaking, the outward face of the management plane is the northbound interface. For OpenShift's SDN, that northbound interface is CNI: when kubelet receives a request to create a pod, it calls the CRI interface to do the pod-creation work, and calls CNI to handle the networking. The CNI side here splits into two parts:

One part is a binary executable: kubelet runs it directly and passes in parameters such as the pod ID and namespace;

The other part is the CNI server, which receives requests from the CNI client and calls the SDN controller to configure OVS bridge ports and flow tables, wiring up the pod's network.

The main embodiment of the control plane is the southbound interface (some management-plane functionality may also be implemented through it, but that is not our focus here). For a data-center SDN controller, the southbound protocol is a large part of the design. Mention southbound protocols and many people think of OpenFlow, NETCONF, XMPP, P4Runtime, and so on. Prepare to be disappointed: none of those fancy protocols are used here. What we have instead are plain, familiar CLI tools such as ovs-vsctl, ovs-ofctl, and iptables; wrapping these everyday commands in functions and invoking them is what constitutes the southbound protocol used here.

Many readers may quietly wonder whether merely shelling out to a few commands is up to the job. The answer is an emphatic yes, no problem at all: the famous Neutron in OpenStack does exactly the same thing, with its default mechanism also controlling OVS by invoking commands directly. So rest easy.

Here we focus on the data-plane model of this SDN solution: once the SDN controller receives a network request from the northbound CNI server, how it drives OVS to add, delete, modify, and query flows, and how it performs the related iptables operations.

The data plane is the set of forwarding devices: the routers, switches, firewalls, and load balancers of the traditional hardware vendors, as well as purely software vSwitches and vRouters. OVS is the archetypal software example, having ridden the wave of OpenStack and cloud computing to considerable popularity. In OpenShift's SDN solution, OVS is the forwarding substrate of the data plane and plays a critical role.

Northbound Interface

In a basic SDN system, the northbound interface faces users and applications directly, so its first job is to understand the users' "language". Here that language is CNI, the key network interface of the container platform, whose basic operations are: add a network, delete a network, add a network list, and delete a network list.
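For orientation, a CNI plugin is ultimately just a binary that dispatches those verbs. With the CNI library of that era (~v0.6), a minimal hypothetical skeleton (a sketch, not the openshift-sdn source) looks roughly like this:

package main

import (
    "github.com/containernetworking/cni/pkg/skel"
    types020 "github.com/containernetworking/cni/pkg/types/020"
    "github.com/containernetworking/cni/pkg/version"
)

// cmdAdd handles ADD: kubelet execs the binary with the container ID and
// netns path in the environment and the network config on stdin.
func cmdAdd(args *skel.CmdArgs) error {
    // ... wire up veth/bridge/flows for args.ContainerID and args.Netns ...
    result := &types020.Result{CNIVersion: "0.2.0"} // placeholder IPAM result
    return result.Print()                           // print the JSON result to stdout
}

// cmdDel handles DEL: tear the pod's network back down.
func cmdDel(args *skel.CmdArgs) error {
    // ... release the IP and delete ports/flows ...
    return nil
}

func main() {
    // skel parses the environment and stdin and routes to the right handler.
    skel.PluginMain(cmdAdd, cmdDel, version.All)
}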

OpenShift's SDN first implements a CNI client; compiled, it yields a binary executable that the installer typically places under /opt/cni/bin for kubelet to invoke.

From pkg/network/sdn-cni-plugin/openshift-sdn.go#line57:

// Send a CNI request to the CNI server via JSON + HTTP over a root-owned unix socket,
// and return the result
func (p *cniPlugin) doCNI(url string, req *cniserver.CNIRequest) ([]byte, error) {
    data, err := json.Marshal(req)
    if err != nil {
        return nil, fmt.Errorf("failed to marshal CNI request %v: %v", req, err)
    }

    client := &http.Client{
        Transport: &http.Transport{
            Dial: func(proto, addr string) (net.Conn, error) {
                return net.Dial("unix", p.socketPath)
            },
        },
    }

    var resp *http.Response
    err = p.hostNS.Do(func(ns.NetNS) error {
        resp, err = client.Post(url, "application/json", bytes.NewReader(data))
        return err
    })
    if err != nil {
        return nil, fmt.Errorf("failed to send CNI request: %v", err)
    }
    defer resp.Body.Close()

    body, err := ioutil.ReadAll(resp.Body)
    if err != nil {
        return nil, fmt.Errorf("failed to read CNI result: %v", err)
    }

    if resp.StatusCode != 200 {
        return nil, fmt.Errorf("CNI request failed with status %v: '%s'", resp.StatusCode, string(body))
    }

    return body, nil
}

// Send the ADD command environment and config to the CNI server, returning
// the IPAM result to the caller
func (p *cniPlugin) doCNIServerAdd(req *cniserver.CNIRequest, hostVeth string) (types.Result, error) {
    req.HostVeth = hostVeth
    body, err := p.doCNI("http://dummy/", req)
    if err != nil {
        return nil, err
    }

    // We currently expect CNI version 0.2.0 results, because that's the
    // CNIVersion we pass in our config JSON
    result, err := types020.NewResult(body)
    if err != nil {
        return nil, fmt.Errorf("failed to unmarshal response '%s': %v", string(body), err)
    }

    return result, nil
}

....

func (p *cniPlugin) CmdDel(args *skel.CmdArgs) error {
    _, err := p.doCNI("http://dummy/", newCNIRequest(args))
    return err
}

The client mainly implements two kinds of methods, CmdAdd and CmdDel, for kubelet to call, covering the network configuration changes needed when a pod is created or deleted.

Next, a CNI server is implemented to receive and handle the CNI client's requests; server and client communicate via HTTP + JSON over a unix-domain socket.

In OpenShift, every node starts a CNI server as part of node startup. From origin/pkg/network/node/pod.go#line170:

// Start the CNI server and start processing requests from it
func (m *podManager) Start(rundir string, localSubnetCIDR string, clusterNetworks []common.ClusterNetwork, serviceNetworkCIDR string) error {
    if m.enableHostports {
        iptInterface := utiliptables.New(utilexec.New(), utildbus.New(), utiliptables.ProtocolIpv4)
        m.hostportSyncer = kubehostport.NewHostportSyncer(iptInterface)
    }

    var err error
    if m.ipamConfig, err = getIPAMConfig(clusterNetworks, localSubnetCIDR); err != nil {
        return err
    }

    go m.processCNIRequests()
    m.cniServer = cniserver.NewCNIServer(rundir, &cniserver.Config{MTU: m.mtu, ServiceNetworkCIDR: serviceNetworkCIDR})
    return m.cniServer.Start(m.handleCNIRequest)
}

The cniserver.NewCNIServer call (line 184 above) is implemented in origin/pkg/network/node/cniserver/cniserver.go#line120:

// Create and return a new CNIServer object which will listen on a socket in the given path
func NewCNIServer(rundir string, config *Config) *CNIServer {
    router := mux.NewRouter()
    s := &CNIServer{
        Server: http.Server{
            Handler: router,
        },
        rundir: rundir,
        config: config,
    }
    router.NotFoundHandler = http.HandlerFunc(http.NotFound)
    router.HandleFunc("/", s.handleCNIRequest).Methods("POST")
    return s
}

....

// Start the CNIServer's local HTTP server on a root-owned Unix domain socket.
// requestFunc will be called to handle pod setup/teardown operations on each
// request to the CNIServer's HTTP server, and should return a PodResult
// when the operation has completed.
func (s *CNIServer) Start(requestFunc cniRequestFunc) error {
    if requestFunc == nil {
        return fmt.Errorf("no pod request handler")
    }
    s.requestFunc = requestFunc
    ....

After the CNI server receives a request from the client, it calls on the backend (ovscontroller and related files) to carry out the actual network configuration.

Southbound Interface

As described above, OpenShift's SDN realizes its southbound protocol by invoking CLI commands directly. Let's take a quick look at the implementation:

origin/pkg/util/ovs/ovs.go#line140

....
const (
    OVS_OFCTL = "ovs-ofctl"
    OVS_VSCTL = "ovs-vsctl"
)
....
func (ovsif *ovsExec) execWithStdin(cmd string, stdinArgs []string, args ...string) (string, error) {
    logLevel := glog.Level(4)
    switch cmd {
    case OVS_OFCTL:
        if args[0] == "dump-flows" {
            logLevel = glog.Level(5)
        }
        args = append([]string{"-O", "OpenFlow13"}, args...)
    case OVS_VSCTL:
        args = append([]string{"--timeout=30"}, args...)
    }

    kcmd := ovsif.execer.Command(cmd, args...)
    if stdinArgs != nil {
        stdinString := strings.Join(stdinArgs, "\n")
        stdin := bytes.NewBufferString(stdinString)
        kcmd.SetStdin(stdin)

        glog.V(logLevel).Infof("Executing: %s %s <<\n%s", cmd, strings.Join(args, " "), stdinString)
    } else {
        glog.V(logLevel).Infof("Executing: %s %s", cmd, strings.Join(args, " "))
    }

    output, err := kcmd.CombinedOutput()
    if err != nil {
        glog.V(2).Infof("Error executing %s: %s", cmd, string(output))
        return "", err
    }

    outStr := string(output)
    if outStr != "" {
        // If output is a single line, strip the trailing newline
        nl := strings.Index(outStr, "\n")
        if nl == len(outStr)-1 {
            outStr = outStr[:nl]
        }
    }
    return outStr, nil
}

This wraps the two OVS workhorse commands, ovs-vsctl and ovs-ofctl, in a function. Everything that follows, such as adding OVS ports, programming forwarding flows, and QoS rate limiting, configures OVS by calling through this function.
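For a concrete feel of what these wrappers emit, here is a small, self-contained sketch (hypothetical bridge and port names) that runs the same kind of ovs-vsctl invocations that execWithStdin ultimately produces:

package main

import (
    "fmt"
    "os/exec"
)

func main() {
    // Add (or keep) port veth1234 on bridge br0 and request OpenFlow port 3,
    // mirroring what an AddPort-style helper sends through execWithStdin.
    out, err := exec.Command("ovs-vsctl", "--timeout=30",
        "--may-exist", "add-port", "br0", "veth1234",
        "--", "set", "Interface", "veth1234", "ofport_request=3").CombinedOutput()
    if err != nil {
        fmt.Printf("add-port failed: %v: %s\n", err, out)
        return
    }

    // Read back the OpenFlow port number actually assigned.
    ofport, err := exec.Command("ovs-vsctl", "--timeout=30",
        "get", "Interface", "veth1234", "ofport").CombinedOutput()
    if err != nil {
        fmt.Printf("get ofport failed: %v: %s\n", err, ofport)
        return
    }
    fmt.Printf("veth1234 ofport: %s", ofport)
}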

origin/pkg/network/node/iptables.go#line90

// syncIPTableRules syncs the cluster network cidr iptables rules.
// Called from SyncLoop() or firewalld reload()
func (n *NodeIPTables) syncIPTableRules() error {
    n.mu.Lock()
    defer n.mu.Unlock()

    start := time.Now()
    defer func() {
        glog.V(4).Infof("syncIPTableRules took %v", time.Since(start))
    }()
    glog.V(3).Infof("Syncing openshift iptables rules")

    chains := n.getNodeIPTablesChains()
    for i := len(chains) - 1; i >= 0; i-- {
        chain := chains[i]
        // Create chain if it does not already exist
        chainExisted, err := n.ipt.EnsureChain(iptables.Table(chain.table), iptables.Chain(chain.name))
        if err != nil {
            return fmt.Errorf("failed to ensure chain %s exists: %v", chain.name, err)
        }
        if chain.srcChain != "" {
            // Create the rule pointing to it from its parent chain. Note that since we
            // use iptables.Prepend each time, but process the chains in reverse order,
            // chains with the same table and srcChain (ie, OPENSHIFT-FIREWALL-FORWARD
            // and OPENSHIFT-ADMIN-OUTPUT-RULES) will run in the same order as they
            // appear in getNodeIPTablesChains().
            _, err = n.ipt.EnsureRule(iptables.Prepend, iptables.Table(chain.table), iptables.Chain(chain.srcChain), append(chain.srcRule, "-j", chain.name)...)
            if err != nil {
                return fmt.Errorf("failed to ensure rule from %s to %s exists: %v", chain.srcChain, chain.name, err)
            }
        }

        // Add/sync the rules
        rulesExisted, err := n.addChainRules(chain)
        if err != nil {
            return err
        }
        if chainExisted && !rulesExisted {
            // Chain existed but not with the expected rules; this probably means
            // it contained rules referring to a different subnet; flush them
            // and try again.
            if err = n.ipt.FlushChain(iptables.Table(chain.table), iptables.Chain(chain.name)); err != nil {
                return fmt.Errorf("failed to flush chain %s: %v", chain.name, err)
            }
            if _, err = n.addChainRules(chain); err != nil {
                return err
            }
        }
    }

    return nil
}

This wraps the iptables-related commands. The main function keeps a sync loop running that applies the iptables rules pushed down by the SDN controller and verifies that they took effect. When a user creates a Service through the API, the NAT mapping for its ClusterIP, the access control, and external network access are all implemented by calling iptables.
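As a concrete illustration, here is a minimal sketch (assumed cluster CIDR, not origin code; import paths vary across Kubernetes versions) of ensuring one masquerade rule through the same Kubernetes iptables utility used above:

package main

import (
    "fmt"

    utildbus "k8s.io/kubernetes/pkg/util/dbus"
    utiliptables "k8s.io/kubernetes/pkg/util/iptables"
    utilexec "k8s.io/utils/exec"
)

func main() {
    // Same constructor podManager.Start uses above.
    ipt := utiliptables.New(utilexec.New(), utildbus.New(), utiliptables.ProtocolIpv4)

    // Equivalent to: iptables -t nat -A OPENSHIFT-MASQUERADE \
    //   -s 10.128.0.0/14 -j MASQUERADE   (10.128.0.0/14 is an assumed CIDR)
    existed, err := ipt.EnsureRule(utiliptables.Append, utiliptables.TableNAT,
        utiliptables.Chain("OPENSHIFT-MASQUERADE"),
        "-s", "10.128.0.0/14", "-j", "MASQUERADE")
    if err != nil {
        fmt.Printf("ensure rule failed: %v\n", err)
        return
    }
    fmt.Printf("rule already existed: %v\n", existed)
}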

Initialization

When a new cluster is created or a physical node is added, the SDN controller performs some initial configuration on the node, mainly in two parts: OVS and iptables.

OVS

OVS initialization mainly covers creating the bridge br0 and its ports (tun0, vxlan0), plus configuring the initial flow tables.

The flow table model deserves particular attention. OpenShift's SDN model uses the following tables:

// Table 0: initial dispatch based on in_port
// Table 10: VXLAN ingress filtering; filled in by AddHostSubnetRules()
// Table 20: from OpenShift container; validate IP/MAC, assign tenant-id; filled in by setupPodFlows
// Table 21: from OpenShift container; NetworkPolicy plugin uses this for connection tracking
// Table 25: IP from OpenShift container via Service IP; reload tenant-id; filled in by setupPodFlows
// Table 30: general routing
// Table 40: ARP to local container, filled in by setupPodFlows
// Table 50: ARP to remote container; filled in by AddHostSubnetRules()
// Table 60: IP to service from pod
// Table 70: IP to local container: vnid/port mappings; filled in by setupPodFlows
// Table 80: IP policy enforcement; mostly managed by the osdnPolicy
// Table 90: IP to remote container; filled in by AddHostSubnetRules()
// Table 100: egress routing; edited by UpdateNamespaceEgressRules()
// Table 101: egress network policy dispatch; edited by UpdateEgressNetworkPolicy()
// Table 110: outbound multicast filtering, updated by UpdateLocalMulticastFlows()
// Table 111: multicast delivery from local pods to the VXLAN; only one rule, updated by UpdateVXLANMulticastRules()
// Table 120: multicast delivery to local pods (either from VXLAN or local pods); updated by UpdateLocalMulticastFlows()
// Table 253: rule version note
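Putting those roles together, a pod-to-pod packet between two nodes traverses the pipeline roughly as follows (a sketch inferred from the table descriptions above, not captured from a live system):

// Node A (sender):   table 0 (from container) -> 20 (validate source IP/MAC, tag VNID in REG0)
//                    -> 21 -> 30 (general routing: destination is a remote node's subnet)
//                    -> 90 (set tunnel ID from the VNID, output via vxlan0)
// Node B (receiver): table 0 (in_port=1, vxlan0; copy tunnel ID to REG0) -> 10 (tun_src filter)
//                    -> 30 -> 70 (load destination VNID/ofport into REG1/REG2)
//                    -> 80 (policy check) -> output to the pod's veth port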

The dispatch between these tables is set up as follows:

origin/pkg/network/node/ovscontroller.go#line66

func (oc *ovsController) SetupOVS(clusterNetworkCIDR []string, serviceNetworkCIDR, localSubnetCIDR, localSubnetGateway string, mtu uint32) error {
    ....
    err = oc.ovs.AddBridge("fail-mode=secure", "protocols=OpenFlow13")
    if err != nil {
        return err
    }
    ....
    _ = oc.ovs.DeletePort(Vxlan0)
    _, err = oc.ovs.AddPort(Vxlan0, 1, "type=vxlan", `options:remote_ip="flow"`, `options:key="flow"`)
    if err != nil {
        return err
    }

    _ = oc.ovs.DeletePort(Tun0)
    _, err = oc.ovs.AddPort(Tun0, 2, "type=internal", fmt.Sprintf("mtu_request=%d", mtu))
    if err != nil {
        return err
    }

    otx := oc.ovs.NewTransaction()

    // Table 0: initial dispatch based on in_port
    if oc.useConnTrack {
        otx.AddFlow("table=0, priority=300, ip, ct_state=-trk, actions=ct(table=0)")
    }
    // vxlan0
    for _, clusterCIDR := range clusterNetworkCIDR {
        otx.AddFlow("table=0, priority=200, in_port=1, arp, nw_src=%s, nw_dst=%s, actions=move:NXM_NX_TUN_ID[0..31]->NXM_NX_REG0[],goto_table:10", clusterCIDR, localSubnetCIDR)
        otx.AddFlow("table=0, priority=200, in_port=1, ip, nw_src=%s, actions=move:NXM_NX_TUN_ID[0..31]->NXM_NX_REG0[],goto_table:10", clusterCIDR)
        otx.AddFlow("table=0, priority=200, in_port=1, ip, nw_dst=%s, actions=move:NXM_NX_TUN_ID[0..31]->NXM_NX_REG0[],goto_table:10", clusterCIDR)
    }
    otx.AddFlow("table=0, priority=150, in_port=1, actions=drop")
    // tun0
    if oc.useConnTrack {
        otx.AddFlow("table=0, priority=400, in_port=2, ip, nw_src=%s, actions=goto_table:30", localSubnetGateway)
        for _, clusterCIDR := range clusterNetworkCIDR {
            otx.AddFlow("table=0, priority=300, in_port=2, ip, nw_src=%s, nw_dst=%s, actions=goto_table:25", localSubnetCIDR, clusterCIDR)
        }
    }
    otx.AddFlow("table=0, priority=250, in_port=2, ip, nw_dst=224.0.0.0/4, actions=drop")
    for _, clusterCIDR := range clusterNetworkCIDR {
        otx.AddFlow("table=0, priority=200, in_port=2, arp, nw_src=%s, nw_dst=%s, actions=goto_table:30", localSubnetGateway, clusterCIDR)
    }
    otx.AddFlow("table=0, priority=200, in_port=2, ip, actions=goto_table:30")
    otx.AddFlow("table=0, priority=150, in_port=2, actions=drop")
    // else, from a container
    otx.AddFlow("table=0, priority=100, arp, actions=goto_table:20")
    otx.AddFlow("table=0, priority=100, ip, actions=goto_table:20")
    otx.AddFlow("table=0, priority=0, actions=drop")

    // Table 10: VXLAN ingress filtering; filled in by AddHostSubnetRules()
    // eg, "table=10, priority=100, tun_src=${remote_node_ip}, actions=goto_table:30"
    otx.AddFlow("table=10, priority=0, actions=drop")

    // (${tenant_id} is always 0 for single-tenant)
    otx.AddFlow("table=20, priority=0, actions=drop")

    // Table 21: from OpenShift container; NetworkPolicy plugin uses this for connection tracking
    otx.AddFlow("table=21, priority=0, actions=goto_table:30")

    // Table 253: rule version note
    otx.AddFlow("table=%d, actions=note:%s", ruleVersionTable, oc.getVersionNote())

    return otx.Commit()
}

iptables

This mainly involves the rules for the cluster network and service network CIDRs, NAT, and the security policy for VXLAN traffic (udp port 4789).

origin/pkg/network/node/iptables.go#line146

func (n *NodeIPTables) getNodeIPTablesChains() []Chain {
    var chainArray []Chain

    chainArray = append(chainArray,
        Chain{
            table:    "filter",
            name:     "OPENSHIFT-FIREWALL-ALLOW",
            srcChain: "INPUT",
            srcRule:  []string{"-m", "comment", "--comment", "firewall overrides"},
            rules: [][]string{
                {"-p", "udp", "--dport", vxlanPort, "-m", "comment", "--comment", "VXLAN incoming", "-j", "ACCEPT"},
                {"-i", Tun0, "-m", "comment", "--comment", "from SDN to localhost", "-j", "ACCEPT"},
                {"-i", "docker0", "-m", "comment", "--comment", "from docker to localhost", "-j", "ACCEPT"},
            },
        },
        Chain{
            table:    "filter",
            name:     "OPENSHIFT-ADMIN-OUTPUT-RULES",
            srcChain: "FORWARD",
            srcRule:  []string{"-i", Tun0, "!", "-o", Tun0, "-m", "comment", "--comment", "administrator overrides"},
            rules:    nil,
        },
    )

    var masqRules [][]string
    var masq2Rules [][]string
    var filterRules [][]string
    for _, cidr := range n.clusterNetworkCIDR {
        if n.masqueradeServices {
            masqRules = append(masqRules, []string{"-s", cidr, "-m", "comment", "--comment", "masquerade pod-to-service and pod-to-external traffic", "-j", "MASQUERADE"})
        } else {
            masqRules = append(masqRules, []string{"-s", cidr, "-m", "comment", "--comment", "masquerade pod-to-external traffic", "-j", "OPENSHIFT-MASQUERADE-2"})
            masq2Rules = append(masq2Rules, []string{"-d", cidr, "-m", "comment", "--comment", "masquerade pod-to-external traffic", "-j", "RETURN"})
        }

        filterRules = append(filterRules, []string{"-s", cidr, "-m", "comment", "--comment", "attempted resend after connection close", "-m", "conntrack", "--ctstate", "INVALID", "-j", "DROP"})
        filterRules = append(filterRules, []string{"-d", cidr, "-m", "comment", "--comment", "forward traffic from SDN", "-j", "ACCEPT"})
        filterRules = append(filterRules, []string{"-s", cidr, "-m", "comment", "--comment", "forward traffic to SDN", "-j", "ACCEPT"})
    }

    chainArray = append(chainArray,
        Chain{
            table:    "nat",
            name:     "OPENSHIFT-MASQUERADE",
            srcChain: "POSTROUTING",
            srcRule:  []string{"-m", "comment", "--comment", "rules for masquerading OpenShift traffic"},
            rules:    masqRules,
        },
        Chain{
            table:    "filter",
            name:     "OPENSHIFT-FIREWALL-FORWARD",
            srcChain: "FORWARD",
            srcRule:  []string{"-m", "comment", "--comment", "firewall overrides"},
            rules:    filterRules,
        },
    )
    if !n.masqueradeServices {
        masq2Rules = append(masq2Rules, []string{"-j", "MASQUERADE"})
        chainArray = append(chainArray,
            Chain{
                table: "nat",
                name:  "OPENSHIFT-MASQUERADE-2",
                rules: masq2Rules,
            },
        )
    }
    return chainArray
}

An Example

Pod Add

OVS

When a new pod is added, the underlying CRI (e.g. docker) is called to create the pod's containers. Kubelet first creates an infra (pause) container and configures its network, then creates the actual workload containers and joins them to the infra container's network namespace; the workload containers plus the infra container together make up the pod. Once the infra container is up, kubelet calls the network plugin with the infra container's namespace as an input parameter to start the network setup flow. Here that means executing /opt/cni/bin/openshift-sdn, which acts as the CNI client and sends a request to the CNI server; the controller then issues the commands to add the new OVS port and program the flows that forward the pod's IP.

origin/pkg/network/node/ovscontroller.go#line266

func (oc *ovsController) SetUpPod(sandboxID, hostVeth string, podIP net.IP, vnid uint32) (int, error) {
    ofport, err := oc.ensureOvsPort(hostVeth, sandboxID, podIP.String())
    if err != nil {
        return -1, err
    }
    return ofport, oc.setupPodFlows(ofport, podIP, vnid)
}
....
func (oc *ovsController) setupPodFlows(ofport int, podIP net.IP, vnid uint32) error {
    otx := oc.ovs.NewTransaction()

    ipstr := podIP.String()
    podIP = podIP.To4()
    ipmac := fmt.Sprintf("00:00:%02x:%02x:%02x:%02x/00:00:ff:ff:ff:ff", podIP[0], podIP[1], podIP[2], podIP[3])

    // ARP/IP traffic from container
    otx.AddFlow("table=20, priority=100, in_port=%d, arp, nw_src=%s, arp_sha=%s, actions=load:%d->NXM_NX_REG0[], goto_table:21", ofport, ipstr, ipmac, vnid)
    otx.AddFlow("table=20, priority=100, in_port=%d, ip, nw_src=%s, actions=load:%d->NXM_NX_REG0[], goto_table:21", ofport, ipstr, vnid)
    if oc.useConnTrack {
        otx.AddFlow("table=25, priority=100, ip, nw_src=%s, actions=load:%d->NXM_NX_REG0[], goto_table:30", ipstr, vnid)
    }

    // ARP request/response to container (not isolated)
    otx.AddFlow("table=40, priority=100, arp, nw_dst=%s, actions=output:%d", ipstr, ofport)

    // IP traffic to container
    otx.AddFlow("table=70, priority=100, ip, nw_dst=%s, actions=load:%d->NXM_NX_REG1[], load:%d->NXM_NX_REG2[], goto_table:80", ipstr, vnid, ofport)

    return otx.Commit()
}
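For example, with assumed values for illustration (pod IP 10.128.0.10, OpenFlow port 3, VNID 42), setupPodFlows would install flows along these lines:

table=20, priority=100, in_port=3, arp, nw_src=10.128.0.10, arp_sha=00:00:0a:80:00:0a/00:00:ff:ff:ff:ff, actions=load:42->NXM_NX_REG0[], goto_table:21
table=20, priority=100, in_port=3, ip, nw_src=10.128.0.10, actions=load:42->NXM_NX_REG0[], goto_table:21
table=40, priority=100, arp, nw_dst=10.128.0.10, actions=output:3
table=70, priority=100, ip, nw_dst=10.128.0.10, actions=load:42->NXM_NX_REG1[], load:3->NXM_NX_REG2[], goto_table:80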

iptables

origin/pkg/network/node/iptables.go#line216

func (n *NodeIPTables) AddEgressIPRules(egressIP, mark string) error {
    for _, cidr := range n.clusterNetworkCIDR {
        _, err := n.ipt.EnsureRule(iptables.Prepend, iptables.TableNAT, iptables.Chain("OPENSHIFT-MASQUERADE"), "-s", cidr, "-m", "mark", "--mark", mark, "-j", "SNAT", "--to-source", egressIP)
        if err != nil {
            return err
        }
    }
    _, err := n.ipt.EnsureRule(iptables.Append, iptables.TableFilter, iptables.Chain("OPENSHIFT-FIREWALL-ALLOW"), "-d", egressIP, "-m", "conntrack", "--ctstate", "NEW", "-j", "REJECT")
    return err
}

Pod Delete

OVS

When a pod is deleted, the SDN controller likewise calls the corresponding teardown functions to remove the pod's flows:

origin/pkg/network/node/ovscontroller.go#line256

func (oc *ovsController) cleanupPodFlows(podIP net.IP) error {
    ipstr := podIP.String()

    otx := oc.ovs.NewTransaction()
    otx.DeleteFlows("ip, nw_dst=%s", ipstr)
    otx.DeleteFlows("ip, nw_src=%s", ipstr)
    otx.DeleteFlows("arp, nw_dst=%s", ipstr)
    otx.DeleteFlows("arp, nw_src=%s", ipstr)
    return otx.Commit()
}

func (oc *ovsController) DeleteServiceRules(service *kapi.Service) error {
    otx := oc.ovs.NewTransaction()
    otx.DeleteFlows(generateBaseServiceRule(service.Spec.ClusterIP))
    return otx.Commit()
}

iptables

origin/pkg/network/node/iptables.go#line227

func (n *NodeIPTables) DeleteEgressIPRules(egressIP, mark string) error {
    for _, cidr := range n.clusterNetworkCIDR {
        err := n.ipt.DeleteRule(iptables.TableNAT, iptables.Chain("OPENSHIFT-MASQUERADE"), "-s", cidr, "-m", "mark", "--mark", mark, "-j", "SNAT", "--to-source", egressIP)
        if err != nil {
            return err
        }
    }
    return n.ipt.DeleteRule(iptables.TableFilter, iptables.Chain("OPENSHIFT-FIREWALL-ALLOW"), "-d", egressIP, "-m", "conntrack", "--ctstate", "NEW", "-j", "REJECT")
}

Project Add

origin/pkg/network/node/vnids.go#line137

func (vmap *nodeVNIDMap) setVNID(name string, id uint32, mcEnabled bool) {
    vmap.lock.Lock()
    defer vmap.lock.Unlock()

    if oldId, found := vmap.ids[name]; found {
        vmap.removeNamespaceFromSet(name, oldId)
    }

    vmap.ids[name] = id
    vmap.mcEnabled[name] = mcEnabled
    vmap.addNamespaceToSet(name, id)

    glog.Infof("Associate netid %d to namespace %q with mcEnabled %v", id, name, mcEnabled)
}

Project Delete

origin/pkg/network/node/vnids.go#line137

func (vmap *nodeVNIDMap) unsetVNID(name string) (id uint32, err error) {
    vmap.lock.Lock()
    defer vmap.lock.Unlock()

    id, found := vmap.ids[name]
    if !found {
        return 0, fmt.Errorf("failed to find netid for namespace: %s in vnid map", name)
    }

    vmap.removeNamespaceFromSet(name, id)
    delete(vmap.ids, name)
    delete(vmap.mcEnabled, name)

    glog.Infof("Dissociate netid %d from namespace %q", id, name)
    return id, nil
}

From the source above we can see that creating or deleting a tenant project mainly adjusts the VNID bookkeeping. In OpenShift's multitenant network model, the controller assigns each project a VNID that identifies the tenant project and isolates projects from one another; different projects can also be joined by command so that they can reach each other. At the bottom, this isolation (or connectivity) is enforced mainly by table 80.
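To make the table 80 mechanics concrete, in the multitenant plugin the policy flows look roughly like the sketch below (reconstructed for illustration, not quoted source): VNID 0, the default project, is privileged, and other traffic passes only when the source VNID in REG0 matches (or has been joined with) the destination VNID in REG1:

// VNID 0 (the "default" project) may talk to and be reached by everyone
table=80, priority=200, reg0=0, actions=output:NXM_NX_REG2[]
table=80, priority=200, reg1=0, actions=output:NXM_NX_REG2[]
// same or joined tenant: source VNID (REG0) matches destination VNID (REG1)
table=80, priority=100, reg0=42, reg1=42, actions=output:NXM_NX_REG2[]
// everything else is dropped
table=80, priority=0, actions=drop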

This has been only a shallow pass over the basic principles and part of the source code; upcoming installments will analyze concrete cases. Criticism and corrections are welcome!

[Recommended Reading]

"Routing Techniques" under OpenFlow (OVS)

OpenShift Source Walkthrough: Pod Network Configuration (Part 1)

OpenShift Source Walkthrough: Pod Network Configuration (Part 2)

About 优云数智

优云数智 (Shanghai UMCloud Computing Co., Ltd.) is a cloud computing vendor focused on enterprise-grade private cloud products and solutions, offering a one-stop PaaS + IaaS stack. Its parent company is UCloud, the neutral public cloud provider in China; its private cloud technology comes from top OpenStack, Ceph, and Kubernetes development teams worldwide.

