In-Depth Analysis of the Kubernetes DaemonSet Controller



Author: xidianwangtao@gmail.com | Version: Kubernetes 1.13

Abstract: The DaemonSet is one of the most commonly used Kubernetes objects; we use it to deploy per-Node daemon applications such as logging agents and node monitoring components. From a user's perspective a DaemonSet looks simple, but it actually touches many subtleties: which conditions a DaemonSet Pod must satisfy to run on a Node, whether it can run when a Node reports MemoryPressure or other abnormal Conditions, how its scheduling works, how rolling updates work, and so on. This article starts from the DaemonSet Controller source code and analyzes its key logic.

DaemonSet Controller

DaemonSet Controller Struct

The core fields of the DaemonSet Controller struct are:

  • burstReplicas int : the upper bound on the number of Pods created and of Pods deleted per sync, hard-coded to 250.
  • queue workqueue.RateLimitingInterface : a delaying queue holding the keys (namespace/name) of DaemonSets awaiting sync.
  • syncHandler func(dsKey string) error : syncs one DaemonSet taken from the queue, covering replica management, UpdateStrategy handling and DaemonSet Status updates; this is the core logic of the DaemonSet Controller.
  • expectations controller.ControllerExpectationsInterface : a TTL cache tracking, per DaemonSet, the number of Pod creations/deletions each sync expects.
  • suspendedDaemonPods map[string]sets.String : the key is a NodeName, the value is the set of DaemonSets that have a 'wantToRun & !shouldSchedule' Pod on that Node.
    • wantToRun: true when, during the controller's simulated scheduling (the Predicates are mainly GeneralPredicates and PodToleratesNodeTaints), the only PredicateFailureErrors hit are the resource-style errors listed below, which are ignored; any other PredicateFailureError makes it false. If the DaemonSet spec sets a NodeName, wantToRun is simply whether it matches node.Name.
      • ErrDiskConflict;
      • ErrVolumeZoneConflict;
      • ErrMaxVolumeCountExceeded;
      • ErrNodeUnderMemoryPressure;
      • ErrNodeUnderDiskPressure;
      • InsufficientResourceError;
    • shouldSchedule:
      • If the DaemonSet spec sets a NodeName, shouldSchedule is simply whether it matches node.Name.
      • Any kind of PredicateFailureError during the Predicates makes shouldSchedule false.
      • An InsufficientResourceError likewise makes shouldSchedule false.
  • failedPodsBackoff *flowcontrol.Backoff : when the DaemonSet Controller runs, it starts a goroutine that force-GCs stale failed-Pod backoff entries every 2*MaxDuration (2*15min). Each time syncDaemonSet handles Pods that should be deleted, it delays them on a 1s, 2s, 4s, 8s, ..., 15min backoff schedule for flow control: after the kubelet rejects some DaemonSet Pods, recreating them immediately would only get them rejected again in a useless hot loop, so the Backoff mechanism was added (see the sketch below).
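
To make the backoff concrete, here is a minimal self-contained sketch of the client-go flowcontrol.Backoff API that failedPodsBackoff is built on; the key format and the loop are illustrative, not the controller's code:

package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/util/flowcontrol"
)

func main() {
	// Same parameters as the controller: initial delay 1s, capped at 15min.
	backoff := flowcontrol.NewBackOff(1*time.Second, 15*time.Minute)
	key := "default/fluentd/node-1" // hypothetical per-(DaemonSet, Node) key

	for i := 0; i < 5; i++ {
		now := backoff.Clock.Now()
		if backoff.IsInBackOffSinceUpdate(key, now) {
			// syncDaemonSet would re-enqueue the DaemonSet after this delay.
			fmt.Println("in backoff, current delay:", backoff.Get(key))
		}
		backoff.Next(key, now) // arms 1s on the first call, then doubles: 2s, 4s, ...
	}
}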

Creating and Starting the DaemonSet Controller

NewDaemonSetsController creates the controller; one of its most important jobs is registering the EventHandlers of the following informers:

  • daemonSetInformer: AddFunc/DeleteFunc/UpdateFunc all essentially just enqueue the DaemonSet;
  • historyInformer:
    • AddFunc: addHistory;
    • UpdateFunc: updateHistory;
    • DeleteFunc: deleteHistory;
  • podInformer:
    • AddFunc: addPod;
    • UpdateFunc: updatePod;
    • DeleteFunc: deletePod;
  • nodeInformer:
    • AddFunc: addNode;
    • UpdateFunc: updateNode;

When the DaemonSet Controller's Run starts, it mainly does two things:

  • Start 2 worker goroutines; each worker takes a DaemonSet key from the queue and syncs it.

  • Start 1 failedPodsBackoff GC goroutine that runs every 1min, cleaning up the Failed-Pod backoff entries for all DaemonSet/Node pairs in the cluster.
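
The workers follow the standard client-go workqueue pattern. Below is a minimal sketch of that pattern, not the controller's own code (the real runWorker and syncHandler live in daemon_controller.go):

package main

import (
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/util/workqueue"
)

func main() {
	queue := workqueue.NewRateLimitingQueue(workqueue.DefaultControllerRateLimiter())
	stopCh := make(chan struct{})

	// Stand-in for dsc.syncHandler (i.e. syncDaemonSet).
	syncHandler := func(key string) error { return nil }

	runWorker := func() {
		for {
			item, quit := queue.Get()
			if quit {
				return
			}
			if err := syncHandler(item.(string)); err != nil {
				queue.AddRateLimited(item) // retry later with rate-limited backoff
			} else {
				queue.Forget(item)
			}
			queue.Done(item)
		}
	}

	// The DaemonSet controller starts two such workers.
	for i := 0; i < 2; i++ {
		go wait.Until(runWorker, time.Second, stopCh)
	}
	<-stopCh // block forever; a real controller waits on its stop channel
}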

Only deletePod calls requeueSuspendedDaemonPods. -- Why?

Synchronizing DaemonSets

A worker takes the key of a DaemonSet awaiting sync from the queue and calls syncDaemonSet to reconcile it; syncDaemonSet is the core entry point of DaemonSet management.

pkg/controller/daemon/daemon_controller.go:1208

func (dsc *DaemonSetsController) syncDaemonSet(key string) error {
	...
	ds, err := dsc.dsLister.DaemonSets(namespace).Get(name)
	if errors.IsNotFound(err) {
		klog.V(3).Infof("daemon set has been deleted %v", key)
		dsc.expectations.DeleteExpectations(key)
		return nil
	}
	if err != nil {
		return fmt.Errorf("unable to retrieve ds %v from store: %v", key, err)
	}

	everything := metav1.LabelSelector{}
	if reflect.DeepEqual(ds.Spec.Selector, &everything) {
		dsc.eventRecorder.Eventf(ds, v1.EventTypeWarning, SelectingAllReason, "This daemon set is selecting all pods. A non-empty selector is required.")
		return nil
	}

	// Don't process a daemon set until all its creations and deletions have been processed.
	// For example if daemon set foo asked for 3 new daemon pods in the previous call to manage,
	// then we do not want to call manage on foo until the daemon pods have been created.
	...
	if ds.DeletionTimestamp != nil {
		return nil
	}

	// Construct histories of the DaemonSet, and get the hash of current history
	cur, old, err := dsc.constructHistory(ds)
	if err != nil {
		return fmt.Errorf("failed to construct revisions of DaemonSet: %v", err)
	}
	hash := cur.Labels[apps.DefaultDaemonSetUniqueLabelKey]

	if !dsc.expectations.SatisfiedExpectations(dsKey) {
		// Only update status. Don't raise observedGeneration since controller didn't process object of that generation.
		return dsc.updateDaemonSetStatus(ds, hash, false)
	}

	err = dsc.manage(ds, hash)
	if err != nil {
		return err
	}

	// Process rolling updates if we're ready.
	if dsc.expectations.SatisfiedExpectations(dsKey) {
		switch ds.Spec.UpdateStrategy.Type {
		case apps.OnDeleteDaemonSetStrategyType:
		case apps.RollingUpdateDaemonSetStrategyType:
			err = dsc.rollingUpdate(ds, hash)
		}
		if err != nil {
			return err
		}
	}

	err = dsc.cleanupHistory(ds, old)
	if err != nil {
		return fmt.Errorf("failed to clean up revisions of DaemonSet: %v", err)
	}

	return dsc.updateDaemonSetStatus(ds, hash, true)
}

The core flow is as follows:

  • First check whether the DaemonSet object has been deleted from the local store; if so, delete its entry from expectations.
  • Check whether the DaemonSet's LabelSelector is empty; if so, syncDaemonSet returns without syncing, and no Pods will ever be created for it.
  • If its DeletionTimestamp is non-nil, the user has triggered deletion, so syncDaemonSet returns without syncing; deleting the DaemonSet's Pods is left to the GC Controller.
  • constructHistory then fetches the DaemonSet's current ControllerRevision and all old ControllerRevisions, makes sure every ControllerRevision carries the label "controller-revision-hash: ControllerRevision.Name", and sets the current ControllerRevision's Revision to maxRevision(old) + 1.
  • Check whether the current expectations are satisfied; when they are not, only update the DaemonSet Status and end the sync. (The expectations mechanism is sketched after this list.)
    • Expectations are satisfied when neither add nor del in expectations is greater than 0, meaning the controller's expectations have been fulfilled.
    • Expired expectations also trigger a sync; the timeout is 5min and is not configurable.
    • If expectations has no record for this DaemonSet, that counts as satisfied too, and a sync is triggered.
    • The updateDaemonSetStatus here updates the Pod-count fields of DaemonSet.Status (DesiredNumberScheduled, CurrentNumberScheduled, NumberMisscheduled, NumberReady, UpdatedNumberScheduled, NumberAvailable, NumberUnavailable); note that it does not update ObservedGeneration, which has not changed.
  • Call manage to reconcile the DaemonSet's Pods: compute the lists of Pods to delete and to create, then call syncNodes to create and delete them in batches (1, 2, 4, 8, ...). If Failed DaemonSet Pods were observed on some Nodes, an error is returned after syncNodes. syncNodes drives the add/del counters in expectations down to zero or below; only then will a later syncDaemonSet call manage again.
  • If manage returns an error, this syncDaemonSet round ends here; otherwise the flow continues.
  • Check again whether expectations are satisfied; if so, trigger the DaemonSet update according to its UpdateStrategy:
    • OnDelete: wait for the user to delete Pods, which enqueues the DaemonSet; syncNodes then creates new Pods from the latest Pod Template.
    • RollingUpdate: call rollingUpdate to roll Pods over; analyzed in detail below.
  • If the update succeeds, clean up the oldest ControllerRevisions as needed (when the number of old ControllerRevisions exceeds Spec.RevisionHistoryLimit, default 10).
  • updateDaemonSetStatus updates DaemonSet.Status again; unlike the earlier call, this one also updates Status.ObservedGeneration.
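
The expectations mechanism referenced in the list above can be shown in isolation. A minimal sketch of how SetExpectations, CreationObserved/DeletionObserved and SatisfiedExpectations interact, with a hypothetical DaemonSet key (the controller package is k8s.io/kubernetes/pkg/controller):

package main

import (
	"fmt"

	"k8s.io/kubernetes/pkg/controller"
)

func main() {
	exp := controller.NewControllerExpectations()
	dsKey := "kube-system/node-exporter" // hypothetical namespace/name key

	// syncNodes records how many creates/deletes this sync round intends to do.
	exp.SetExpectations(dsKey, 2, 1)              // expect 2 creations and 1 deletion
	fmt.Println(exp.SatisfiedExpectations(dsKey)) // false: work still in flight

	// The pod informer's addPod/deletePod handlers count the work down.
	exp.CreationObserved(dsKey)
	exp.CreationObserved(dsKey)
	exp.DeletionObserved(dsKey)
	fmt.Println(exp.SatisfiedExpectations(dsKey)) // true: the next sync may call manage
}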

Scheduling of DaemonSet Pods

Before Kubernetes 1.12, the DaemonSet Controller scheduled daemon Pods itself by default: it filled in the Pod's spec.nodeName, the kubelet on that Node watched the event, and the DaemonSet Pod was started locally. In Kubernetes 1.12+ the ScheduleDaemonSetPods FeatureGate is enabled by default, and the scheduling of DaemonSet Pods is handed over to the default scheduler.
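
The contrast in one small sketch; both halves are illustrative and use only the core/v1 types:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	// Before 1.12: the controller creates the Pod with spec.nodeName already set,
	// so it never passes through the scheduler; the kubelet on that Node starts it.
	pod := v1.Pod{}
	pod.Spec.NodeName = "node-1" // hypothetical node name

	// 1.12+ with ScheduleDaemonSetPods: spec.nodeName stays empty; the controller
	// instead injects a NodeAffinity on metadata.name and lets the default
	// scheduler do the binding (shown concretely in the Sync Nodes section below).
	fmt.Println(pod.Spec.NodeName)
}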

DaemonSet Pods Should Be On Node

While managing a DaemonSet, the controller calls podsShouldBeOnNode to compute, per Node, the DaemonSet Pods it wants started on that Node (nodesNeedingDaemonPods), the DaemonSet Pods it wants deleted from that Node (podsToDelete), and the number of already-Failed DaemonSet Pods on that Node. syncNodes then uses these three pieces of information to create and delete the corresponding Pods.

func (dsc *DaemonSetsController) manage(ds *apps.DaemonSet, hash string) error {
	// Find out the pods which are created for the nodes by DaemonSet.
	nodeToDaemonPods, err := dsc.getNodesToDaemonPods(ds)
	...
	for _, node := range nodeList {
		nodesNeedingDaemonPodsOnNode, podsToDeleteOnNode, failedPodsObservedOnNode, err := dsc.podsShouldBeOnNode(
			node, nodeToDaemonPods, ds)

		if err != nil {
			continue
		}

		nodesNeedingDaemonPods = append(nodesNeedingDaemonPods, nodesNeedingDaemonPodsOnNode...)
		podsToDelete = append(podsToDelete, podsToDeleteOnNode...)
		failedPodsObserved += failedPodsObservedOnNode
	}

	// Label new pods using the hash label value of the current history when creating them
	if err = dsc.syncNodes(ds, podsToDelete, nodesNeedingDaemonPods, hash); err != nil {
		return err
	}

	...

	return nil
}

How does podsShouldBeOnNode compute nodesNeedingDaemonPods, podsToDelete and failedPodsObserved? By calling nodeShouldRunDaemonPod(node *v1.Node, ds *apps.DaemonSet), which yields the following three state values:

  • wantToRun: true when, during the simulated scheduling (the Predicates are mainly GeneralPredicates and PodToleratesNodeTaints), the only PredicateFailureErrors hit are the resource-style errors below, which are ignored; any other PredicateFailureError makes it false. If the DaemonSet spec sets a NodeName, wantToRun is simply whether it matches node.Name.
    • ErrDiskConflict;
    • ErrVolumeZoneConflict;
    • ErrMaxVolumeCountExceeded;
    • ErrNodeUnderMemoryPressure;
    • ErrNodeUnderDiskPressure;
    • InsufficientResourceError;
  • shouldSchedule:
    • If the DaemonSet spec sets a NodeName, shouldSchedule is simply whether it matches node.Name.
    • Any kind of PredicateFailureError during the Predicates makes shouldSchedule false.
    • An InsufficientResourceError likewise makes shouldSchedule false.
  • shouldContinueRunning: false when any of the following appears, true otherwise:
    • ErrNodeSelectorNotMatch,
    • ErrPodNotMatchHostName,
    • ErrNodeLabelPresenceViolated,
    • ErrPodNotFitsHostPorts,
    • ErrTaintsTolerationsNotMatch, but only when the Pod does not tolerate the Node's NoExecute taints; if the NoExecute taints are tolerated, shouldContinueRunning stays true. In other words, only NoExecute taint/toleration mismatches count here.
    • ErrPodAffinityNotMatch,
    • ErrServiceAffinityViolated,
    • an unknown predicate failure reason.

In addition, Failed DaemonSet Pods found on a Node are deleted (and later recreated) under the failedPodsBackoff flow control described earlier: delays of 1s, 2s, 4s, 8s, ... capped at 15min, with a forced GC of stale backoff entries every 2*MaxDuration (2*15min).

From these three state values, podsShouldBeOnNode derives nodesNeedingDaemonPods []string, podsToDelete []string and failedPodsObserved int:

// podsShouldBeOnNode figures out the DaemonSet pods to be created and deleted on the given node:
func (dsc *DaemonSetsController) podsShouldBeOnNode(
	node *v1.Node,
	nodeToDaemonPods map[string][]*v1.Pod,
	ds *apps.DaemonSet,
) (nodesNeedingDaemonPods, podsToDelete []string, failedPodsObserved int, err error) {

	wantToRun, shouldSchedule, shouldContinueRunning, err := dsc.nodeShouldRunDaemonPod(node, ds)
	if err != nil {
		return
	}

	daemonPods, exists := nodeToDaemonPods[node.Name]
	dsKey, _ := cache.MetaNamespaceKeyFunc(ds)

	dsc.removeSuspendedDaemonPods(node.Name, dsKey)

	switch {
	case wantToRun && !shouldSchedule:
		// If daemon pod is supposed to run, but can not be scheduled, add to suspended list.
		dsc.addSuspendedDaemonPods(node.Name, dsKey)
	case shouldSchedule && !exists:
		// If daemon pod is supposed to be running on node, but isn't, create daemon pod.
		nodesNeedingDaemonPods = append(nodesNeedingDaemonPods, node.Name)
	case shouldContinueRunning:
		// If a daemon pod failed, delete it
		// If there's non-daemon pods left on this node, we will create it in the next sync loop
		var daemonPodsRunning []*v1.Pod
		for _, pod := range daemonPods {
			if pod.DeletionTimestamp != nil {
				continue
			}
			if pod.Status.Phase == v1.PodFailed {
				failedPodsObserved++

				// This is a critical place where DS is often fighting with kubelet that rejects pods.
				// We need to avoid hot looping and backoff.
				backoffKey := failedPodsBackoffKey(ds, node.Name)

				now := dsc.failedPodsBackoff.Clock.Now()
				inBackoff := dsc.failedPodsBackoff.IsInBackOffSinceUpdate(backoffKey, now)
				if inBackoff {
					delay := dsc.failedPodsBackoff.Get(backoffKey)
					klog.V(4).Infof("Deleting failed pod %s/%s on node %s has been limited by backoff - %v remaining",
						pod.Namespace, pod.Name, node.Name, delay)
					dsc.enqueueDaemonSetAfter(ds, delay)
					continue
				}

				dsc.failedPodsBackoff.Next(backoffKey, now)

				msg := fmt.Sprintf("Found failed daemon pod %s/%s on node %s, will try to kill it", pod.Namespace, pod.Name, node.Name)
				klog.V(2).Infof(msg)
				// Emit an event so that it's discoverable to users.
				dsc.eventRecorder.Eventf(ds, v1.EventTypeWarning, FailedDaemonPodReason, msg)
				podsToDelete = append(podsToDelete, pod.Name)
			} else {
				daemonPodsRunning = append(daemonPodsRunning, pod)
			}
		}
		// If daemon pod is supposed to be running on node, but more than 1 daemon pod is running, delete the excess daemon pods.
		// Sort the daemon pods by creation time, so the oldest is preserved.
		if len(daemonPodsRunning) > 1 {
			sort.Sort(podByCreationTimestampAndPhase(daemonPodsRunning))
			for i := 1; i < len(daemonPodsRunning); i++ {
				podsToDelete = append(podsToDelete, daemonPodsRunning[i].Name)
			}
		}
	case !shouldContinueRunning && exists:
		// If daemon pod isn't supposed to run on node, but it is, delete all daemon pods on node.
		for _, pod := range daemonPods {
			podsToDelete = append(podsToDelete, pod.Name)
		}
	}

	return nodesNeedingDaemonPods, podsToDelete, failedPodsObserved, nil
}

// nodeShouldRunDaemonPod checks a set of preconditions against a (node,daemonset) and returns a summary. 
func (dsc *DaemonSetsController) nodeShouldRunDaemonPod(node *v1.Node, ds *apps.DaemonSet) (wantToRun, shouldSchedule, shouldContinueRunning bool, err error) {
	newPod := NewPod(ds, node.Name)

	// Because these bools require an && of all their required conditions, we start
	// with all bools set to true and set a bool to false if a condition is not met.
	// A bool should probably not be set to true after this line.
	wantToRun, shouldSchedule, shouldContinueRunning = true, true, true
	// If the daemon set specifies a node name, check that it matches with node.Name.
	if !(ds.Spec.Template.Spec.NodeName == "" || ds.Spec.Template.Spec.NodeName == node.Name) {
		return false, false, false, nil
	}

	reasons, nodeInfo, err := dsc.simulate(newPod, node, ds)
	if err != nil {
		klog.Warningf("DaemonSet Predicates failed on node %s for ds '%s/%s' due to unexpected error: %v", node.Name, ds.ObjectMeta.Namespace, ds.ObjectMeta.Name, err)
		return false, false, false, err
	}

	// TODO(k82cn): When 'ScheduleDaemonSetPods' upgrade to beta or GA, remove unnecessary check on failure reason,
	//              e.g. InsufficientResourceError; and simplify "wantToRun, shouldSchedule, shouldContinueRunning"
	//              into one result, e.g. selectedNode.
	var insufficientResourceErr error
	for _, r := range reasons {
		klog.V(4).Infof("DaemonSet Predicates failed on node %s for ds '%s/%s' for reason: %v", node.Name, ds.ObjectMeta.Namespace, ds.ObjectMeta.Name, r.GetReason())
		switch reason := r.(type) {
		case *predicates.InsufficientResourceError:
			insufficientResourceErr = reason
		case *predicates.PredicateFailureError:
			var emitEvent bool
			// we try to partition predicates into two partitions here: intentional on the part of the operator and not.
			switch reason {
			// intentional
			case
				predicates.ErrNodeSelectorNotMatch,
				predicates.ErrPodNotMatchHostName,
				predicates.ErrNodeLabelPresenceViolated,
				// this one is probably intentional since it's a workaround for not having
				// pod hard anti affinity.
				predicates.ErrPodNotFitsHostPorts:
				return false, false, false, nil
			case predicates.ErrTaintsTolerationsNotMatch:
				// DaemonSet is expected to respect taints and tolerations
				fitsNoExecute, _, err := predicates.PodToleratesNodeNoExecuteTaints(newPod, nil, nodeInfo)
				if err != nil {
					return false, false, false, err
				}
				if !fitsNoExecute {
					return false, false, false, nil
				}
				wantToRun, shouldSchedule = false, false
			// unintentional
			case
				predicates.ErrDiskConflict,
				predicates.ErrVolumeZoneConflict,
				predicates.ErrMaxVolumeCountExceeded,
				predicates.ErrNodeUnderMemoryPressure,
				predicates.ErrNodeUnderDiskPressure:
				// wantToRun and shouldContinueRunning are likely true here. They are
				// absolutely true at the time of writing the comment. See first comment
				// of this method.
				shouldSchedule = false
				emitEvent = true
			// unexpected
			case
				predicates.ErrPodAffinityNotMatch,
				predicates.ErrServiceAffinityViolated:
				klog.Warningf("unexpected predicate failure reason: %s", reason.GetReason())
				return false, false, false, fmt.Errorf("unexpected reason: DaemonSet Predicates should not return reason %s", reason.GetReason())
			default:
				klog.V(4).Infof("unknown predicate failure reason: %s", reason.GetReason())
				wantToRun, shouldSchedule, shouldContinueRunning = false, false, false
				emitEvent = true
			}
			if emitEvent {
				dsc.eventRecorder.Eventf(ds, v1.EventTypeWarning, FailedPlacementReason, "failed to place pod on %q: %s", node.ObjectMeta.Name, reason.GetReason())
			}
		}
	}
	// only emit this event if insufficient resource is the only thing
	// preventing the daemon pod from scheduling
	if shouldSchedule && insufficientResourceErr != nil {
		dsc.eventRecorder.Eventf(ds, v1.EventTypeWarning, FailedPlacementReason, "failed to place pod on %q: %s", node.ObjectMeta.Name, insufficientResourceErr.Error())
		shouldSchedule = false
	}
	return
}
  • If wantToRun && !shouldSchedule, the DaemonSet is recorded in the suspendedDaemonPods entry for that Node.

  • If shouldSchedule && !exists, the Node is appended to nodesNeedingDaemonPods.

  • If shouldContinueRunning && pod.DeletionTimestamp == nil && pod.Status.Phase == v1.PodFailed, the controller checks whether the (DaemonSet, Node) pair is still inside its backoff window (delays double up to the hard-coded 15min maximum). Once the backoff has elapsed, the Pod is appended to podsToDelete; otherwise the DaemonSet is re-enqueued after the remaining delay.

  • If shouldContinueRunning && pod.DeletionTimestamp == nil && pod.Status.Phase != v1.PodFailed, the Pod is appended to daemonPodsRunning, which tracks this DaemonSet's non-Failed Pods running on the Node. If daemonPodsRunning holds more than one Pod, they are sorted by creation time and every Pod except the oldest is appended to podsToDelete.

nodeShouldRunDaemonPod calls simulate to run this simulated scheduling pass for the Pod/Node pair; the returned algorithm.PredicateFailureReason values determine wantToRun, shouldSchedule and shouldContinueRunning. Let's look at the predicate logic inside simulate.

// Predicates checks if a DaemonSet's pod can be scheduled on a node using GeneralPredicates
// and PodToleratesNodeTaints predicate
func Predicates(pod *v1.Pod, nodeInfo *schedulercache.NodeInfo) (bool, []algorithm.PredicateFailureReason, error) {
	var predicateFails []algorithm.PredicateFailureReason

	// If ScheduleDaemonSetPods is enabled, only check nodeSelector, nodeAffinity and toleration/taint match.
	if utilfeature.DefaultFeatureGate.Enabled(features.ScheduleDaemonSetPods) {
		fit, reasons, err := checkNodeFitness(pod, nil, nodeInfo)
		if err != nil {
			return false, predicateFails, err
		}
		if !fit {
			predicateFails = append(predicateFails, reasons...)
		}

		return len(predicateFails) == 0, predicateFails, nil
	}

	critical := kubelettypes.IsCriticalPod(pod)

	fit, reasons, err := predicates.PodToleratesNodeTaints(pod, nil, nodeInfo)
	if err != nil {
		return false, predicateFails, err
	}
	if !fit {
		predicateFails = append(predicateFails, reasons...)
	}
	if critical {
		// If the pod is marked as critical and support for critical pod annotations is enabled,
		// check predicates for critical pods only.
		fit, reasons, err = predicates.EssentialPredicates(pod, nil, nodeInfo)
	} else {
		fit, reasons, err = predicates.GeneralPredicates(pod, nil, nodeInfo)
	}
	if err != nil {
		return false, predicateFails, err
	}
	if !fit {
		predicateFails = append(predicateFails, reasons...)
	}

	return len(predicateFails) == 0, predicateFails, nil
}
  • With the ScheduleDaemonSetPods FeatureGate enabled, the Predicate logic is as follows. No real scheduling happens here; only three predicate checks are made, and the actual scheduling is left to the default scheduler. How does the default scheduler then pin a DaemonSet Pod to its Node? We'll come back to that below.
    • PodFitsHost: if Pod.spec.nodeName is non-empty, check that it matches the Node name;
    • PodMatchNodeSelector: check that the Pod's NodeSelector and NodeAffinity match the Node;
    • PodToleratesNodeTaints: check that the Pod's NoExecute and NoSchedule Tolerations match the Node's Taints.
  • Without the ScheduleDaemonSetPods FeatureGate, the Predicate logic is as follows. Again no real scheduling happens here; only a few predicate checks are made, and the actual scheduling stays with the DaemonSet Controller.
    • PodToleratesNodeTaints: check that the Pod's NoExecute and NoSchedule Tolerations match the Node's Taints.
    • For a critical DaemonSet Pod, additionally run EssentialPredicates:
      • PodFitsHost: if Pod.spec.nodeName is non-empty, check that it matches the Node name;
      • PodFitsHostPorts: check whether the protocol & host ports requested by the DaemonSet Pod are already taken;
      • PodMatchNodeSelector: check that the Pod's NodeSelector and NodeAffinity match the Node;
    • For a non-critical DaemonSet Pod, run GeneralPredicates instead:
      • PodFitsResources: check that the Node's remaining allocatable resources can satisfy the Pod's requests;
      • PodFitsHost: if Pod.spec.nodeName is non-empty, check that it matches the Node name;
      • PodFitsHostPorts: check whether the protocol & host ports requested by the DaemonSet Pod are already taken;
      • PodMatchNodeSelector: check that the Pod's NodeSelector and NodeAffinity match the Node;
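
To see the predicate calls in isolation, here is a hedged sketch of a single (pod, node) simulation mirroring the non-ScheduleDaemonSetPods branch above. The import paths assume the 1.13 source tree; the real controller goes through dsc.simulate, which also accounts for critical Pods:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/kubernetes/pkg/scheduler/algorithm/predicates"
	schedulercache "k8s.io/kubernetes/pkg/scheduler/cache"
)

// simulateOnce checks taint tolerance first, then GeneralPredicates,
// printing every failure reason, as the Predicates function above does.
func simulateOnce(pod *v1.Pod, node *v1.Node, podsOnNode []*v1.Pod) (bool, error) {
	nodeInfo := schedulercache.NewNodeInfo(podsOnNode...)
	if err := nodeInfo.SetNode(node); err != nil {
		return false, err
	}

	fit, reasons, err := predicates.PodToleratesNodeTaints(pod, nil, nodeInfo)
	if err != nil {
		return false, err
	}
	if !fit {
		for _, r := range reasons {
			fmt.Println("taint predicate failed:", r.GetReason())
		}
		return false, nil
	}

	fit, reasons, err = predicates.GeneralPredicates(pod, nil, nodeInfo)
	if err != nil {
		return false, err
	}
	for _, r := range reasons {
		fmt.Println("general predicate failed:", r.GetReason())
	}
	return fit, nil
}

func main() {
	fit, err := simulateOnce(&v1.Pod{}, &v1.Node{}, nil)
	fmt.Println("fits:", fit, "err:", err)
}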

Sync Nodes

podsShouldBeOnNode has given us nodesNeedingDaemonPods []string, podsToDelete []string and failedPodsObserved int; now it is time to create and delete the corresponding Pods.
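
Before reading syncNodes, it helps to see its slow-start batch sizing in isolation. A standalone sketch using the same loop header (createDiff=29 is an arbitrary example; SlowStartInitialBatchSize is 1 in the controller package):

package main

import "fmt"

func min(a, b int) int {
	if a < b {
		return a
	}
	return b
}

func main() {
	createDiff := 29
	batchSize := min(createDiff, 1)
	for pos := 0; createDiff > pos; batchSize, pos = min(2*batchSize, createDiff-(pos+batchSize)), pos+batchSize {
		fmt.Printf("create pods [%d, %d)\n", pos, pos+batchSize) // batches of 1, 2, 4, 8, 14
	}
}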

// syncNodes deletes given pods and creates new daemon set pods on the given nodes
// returns slice with errors if any
func (dsc *DaemonSetsController) syncNodes(ds *apps.DaemonSet, podsToDelete, nodesNeedingDaemonPods []string, hash string) error {
	// We need to set expectations before creating/deleting pods to avoid race conditions.
	dsKey, err := controller.KeyFunc(ds)
	if err != nil {
		return fmt.Errorf("couldn't get key for object %#v: %v", ds, err)
	}

	createDiff := len(nodesNeedingDaemonPods)
	deleteDiff := len(podsToDelete)

	if createDiff > dsc.burstReplicas {
		createDiff = dsc.burstReplicas
	}
	if deleteDiff > dsc.burstReplicas {
		deleteDiff = dsc.burstReplicas
	}

	dsc.expectations.SetExpectations(dsKey, createDiff, deleteDiff)

	// error channel to communicate back failures.  make the buffer big enough to avoid any blocking
	errCh := make(chan error, createDiff+deleteDiff)

	klog.V(4).Infof("Nodes needing daemon pods for daemon set %s: %+v, creating %d", ds.Name, nodesNeedingDaemonPods, createDiff)
	createWait := sync.WaitGroup{}
	// If the returned error is not nil we have a parse error.
	// The controller handles this via the hash.
	generation, err := util.GetTemplateGeneration(ds)
	if err != nil {
		generation = nil
	}
	template := util.CreatePodTemplate(ds.Namespace, ds.Spec.Template, generation, hash)
	// Batch the pod creates. Batch sizes start at SlowStartInitialBatchSize
	// and double with each successful iteration in a kind of "slow start".
	// This handles attempts to start large numbers of pods that would
	// likely all fail with the same error. For example a project with a
	// low quota that attempts to create a large number of pods will be
	// prevented from spamming the API service with the pod create requests
	// after one of its pods fails.  Conveniently, this also prevents the
	// event spam that those failures would generate.
	batchSize := integer.IntMin(createDiff, controller.SlowStartInitialBatchSize)
	for pos := 0; createDiff > pos; batchSize, pos = integer.IntMin(2*batchSize, createDiff-(pos+batchSize)), pos+batchSize {
		errorCount := len(errCh)
		createWait.Add(batchSize)
		for i := pos; i < pos+batchSize; i++ {
			go func(ix int) {
				defer createWait.Done()
				var err error

				podTemplate := &template

				if utilfeature.DefaultFeatureGate.Enabled(features.ScheduleDaemonSetPods) {
					podTemplate = template.DeepCopy()
					// The pod's NodeAffinity will be updated to make sure the Pod is bound
					// to the target node by default scheduler. It is safe to do so because there
					// should be no conflicting node affinity with the target node.
					podTemplate.Spec.Affinity = util.ReplaceDaemonSetPodNodeNameNodeAffinity(
						podTemplate.Spec.Affinity, nodesNeedingDaemonPods[ix])

					err = dsc.podControl.CreatePodsWithControllerRef(ds.Namespace, podTemplate,
						ds, metav1.NewControllerRef(ds, controllerKind))
				} else {
					err = dsc.podControl.CreatePodsOnNode(nodesNeedingDaemonPods[ix], ds.Namespace, podTemplate,
						ds, metav1.NewControllerRef(ds, controllerKind))
				}

				if err != nil && errors.IsTimeout(err) {
					// Pod is created but its initialization has timed out.
					// If the initialization is successful eventually, the
					// controller will observe the creation via the informer.
					// If the initialization fails, or if the pod keeps
					// uninitialized for a long time, the informer will not
					// receive any update, and the controller will create a new
					// pod when the expectation expires.
					return
				}
				if err != nil {
					klog.V(2).Infof("Failed creation, decrementing expectations for set %q/%q", ds.Namespace, ds.Name)
					dsc.expectations.CreationObserved(dsKey)
					errCh <- err
					utilruntime.HandleError(err)
				}
			}(i)
		}
		createWait.Wait()
		// any skipped pods that we never attempted to start shouldn't be expected.
		skippedPods := createDiff - batchSize
		if errorCount < len(errCh) && skippedPods > 0 {
			klog.V(2).Infof("Slow-start failure. Skipping creation of %d pods, decrementing expectations for set %q/%q", skippedPods, ds.Namespace, ds.Name)
			for i := 0; i < skippedPods; i++ {
				dsc.expectations.CreationObserved(dsKey)
			}
			// The skipped pods will be retried later. The next controller resync will
			// retry the slow start process.
			break
		}
	}

	klog.V(4).Infof("Pods to delete for daemon set %s: %+v, deleting %d", ds.Name, podsToDelete, deleteDiff)
	deleteWait := sync.WaitGroup{}
	deleteWait.Add(deleteDiff)
	for i := 0; i < deleteDiff; i++ {
		go func(ix int) {
			defer deleteWait.Done()
			if err := dsc.podControl.DeletePod(ds.Namespace, podsToDelete[ix], ds); err != nil {
				klog.V(2).Infof("Failed deletion, decrementing expectations for set %q/%q", ds.Namespace, ds.Name)
				dsc.expectations.DeletionObserved(dsKey)
				errCh <- err
				utilruntime.HandleError(err)
			}
		}(i)
	}
	deleteWait.Wait()

	// collect errors if any for proper reporting/retry logic in the controller
	errors := []error{}
	close(errCh)
	for err := range errCh {
		errors = append(errors, err)
	}
	return utilerrors.NewAggregate(errors)
}
  • At most 250 Pods are deleted and at most 250 Pods are created per call (burstReplicas).
  • A Pod template is built from the DaemonSet object, adding or updating the following Tolerations (key | operator | effect):
    • node.kubernetes.io/not-ready | Exists | NoExecute
    • node.kubernetes.io/unreachable | Exists | NoExecute
    • node.kubernetes.io/disk-pressure | Exists | NoSchedule
    • node.kubernetes.io/memory-pressure | Exists | NoSchedule
    • node.kubernetes.io/unschedulable | Exists | NoSchedule
    • node.kubernetes.io/network-unavailable | Exists | NoSchedule
    • For a critical Pod, tolerations for node.kubernetes.io/out-of-disk are added as well; the key appears twice in the source, once with effect NoExecute and once with NoSchedule.
  • Pods are labeled controller-revision-hash=$DaemonSetControllerHash.
  • DaemonSet Pods are created in batches (batch sizes of 1, 2, 4, 8, ..., as sketched above) so that a large one-shot creation cannot fail en masse for the same reason. For every failed creation, expectations.adds is decremented by 1.
    • With the ScheduleDaemonSetPods FeatureGate enabled, a NodeAffinity matching metadata.name=$NodeName is added to or updated in the Pod template; this is how DaemonSet Pods end up scheduled by the default scheduler (see the sketch after this list).
  • All Pods in podsToDelete are deleted in one go, without batching.
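
As teased earlier: with ScheduleDaemonSetPods the controller pins each Pod to its target Node through node affinity rather than spec.nodeName. A sketch of the affinity shape that util.ReplaceDaemonSetPodNodeNameNodeAffinity produces (the field names are from the core/v1 API; the helper's exact construction may differ slightly):

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// nodeNameAffinity builds a required node affinity whose matchFields term pins
// the Pod to a single Node by metadata.name, leaving the default scheduler no
// real placement choice.
func nodeNameAffinity(nodeName string) *v1.Affinity {
	return &v1.Affinity{
		NodeAffinity: &v1.NodeAffinity{
			RequiredDuringSchedulingIgnoredDuringExecution: &v1.NodeSelector{
				NodeSelectorTerms: []v1.NodeSelectorTerm{{
					MatchFields: []v1.NodeSelectorRequirement{{
						Key:      "metadata.name",
						Operator: v1.NodeSelectorOpIn,
						Values:   []string{nodeName},
					}},
				}},
			},
		},
	}
}

func main() {
	fmt.Printf("%+v\n", nodeNameAffinity("node-1"))
}

Because the matchFields term selects exactly one Node, the scheduler's decision is forced; taints and tolerations still apply, which is why the tolerations above are added to the template.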

Rolling Updates of a DaemonSet

A DaemonSet rolling update differs slightly from a Deployment's: the DaemonSet RollingUpdate strategy has only one configuration item, MaxUnavailable; there is no MaxSurge counterpart as Deployments have.

// rollingUpdate deletes old daemon set pods making sure that no more than
// ds.Spec.UpdateStrategy.RollingUpdate.MaxUnavailable pods are unavailable
func (dsc *DaemonSetsController) rollingUpdate(ds *apps.DaemonSet, hash string) error {
	nodeToDaemonPods, err := dsc.getNodesToDaemonPods(ds)
	if err != nil {
		return fmt.Errorf("couldn't get node to daemon pod mapping for daemon set %q: %v", ds.Name, err)
	}

	_, oldPods := dsc.getAllDaemonSetPods(ds, nodeToDaemonPods, hash)
	maxUnavailable, numUnavailable, err := dsc.getUnavailableNumbers(ds, nodeToDaemonPods)
	if err != nil {
		return fmt.Errorf("Couldn't get unavailable numbers: %v", err)
	}
	oldAvailablePods, oldUnavailablePods := util.SplitByAvailablePods(ds.Spec.MinReadySeconds, oldPods)

	// for oldPods delete all not running pods
	var oldPodsToDelete []string
	klog.V(4).Infof("Marking all unavailable old pods for deletion")
	for _, pod := range oldUnavailablePods {
		// Skip terminating pods. We won't delete them again
		if pod.DeletionTimestamp != nil {
			continue
		}
		klog.V(4).Infof("Marking pod %s/%s for deletion", ds.Name, pod.Name)
		oldPodsToDelete = append(oldPodsToDelete, pod.Name)
	}

	klog.V(4).Infof("Marking old pods for deletion")
	for _, pod := range oldAvailablePods {
		if numUnavailable >= maxUnavailable {
			klog.V(4).Infof("Number of unavailable DaemonSet pods: %d, is equal to or exceeds allowed maximum: %d", numUnavailable, maxUnavailable)
			break
		}
		klog.V(4).Infof("Marking pod %s/%s for deletion", ds.Name, pod.Name)
		oldPodsToDelete = append(oldPodsToDelete, pod.Name)
		numUnavailable++
	}
	return dsc.syncNodes(ds, oldPodsToDelete, []string{}, hash)
}
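
So rollingUpdate first marks all already-unavailable old Pods for deletion, then deletes available old Pods until numUnavailable reaches maxUnavailable, and hands the deletions to syncNodes with an empty create list. getUnavailableNumbers resolves MaxUnavailable, which may be an integer or a percentage, against the number of Nodes that should run the daemon Pod; a minimal sketch of that resolution (values are illustrative):

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	maxUnavailable := intstr.FromString("20%") // could also be intstr.FromInt(2)
	desiredNumberScheduled := 10               // nodes that should run the daemon pod

	n, err := intstr.GetValueFromIntOrPercent(&maxUnavailable, desiredNumberScheduled, true)
	if err != nil {
		panic(err)
	}
	fmt.Println("at most", n, "old pods may be unavailable at once") // prints 2
}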

Node Updates

The Node Add event is simple: iterate over all DaemonSet objects and call nodeShouldRunDaemonPod to decide, for each, whether it should start a Pod on the new Node. If so, enqueue the DaemonSet and let syncDaemonSet handle it.

For Node Update events, the controller inspects which fields changed and decides accordingly whether to enqueue DaemonSets for syncDaemonSet.

func (dsc *DaemonSetsController) updateNode(old, cur interface{}) {
	oldNode := old.(*v1.Node)
	curNode := cur.(*v1.Node)
	if shouldIgnoreNodeUpdate(*oldNode, *curNode) {
		return
	}

	dsList, err := dsc.dsLister.List(labels.Everything())
	if err != nil {
		klog.V(4).Infof("Error listing daemon sets: %v", err)
		return
	}
	// TODO: it'd be nice to pass a hint with these enqueues, so that each ds would only examine the added node (unless it has other work to do, too).
	for _, ds := range dsList {
		_, oldShouldSchedule, oldShouldContinueRunning, err := dsc.nodeShouldRunDaemonPod(oldNode, ds)
		if err != nil {
			continue
		}
		_, currentShouldSchedule, currentShouldContinueRunning, err := dsc.nodeShouldRunDaemonPod(curNode, ds)
		if err != nil {
			continue
		}
		if (oldShouldSchedule != currentShouldSchedule) || (oldShouldContinueRunning != currentShouldContinueRunning) {
			dsc.enqueueDaemonSet(ds)
		}
	}
}
  • If the Node's Conditions have changed, the update event must not be ignored.
  • Beyond Conditions and ResourceVersion, if the old and new Node objects differ in anything else, the event must not be ignored either.
  • For an update that is not ignored, nodeShouldRunDaemonPod is evaluated for both oldNode and curNode; if either ShouldSchedule or ShouldContinueRunning differs between the two, the DaemonSet is enqueued for syncDaemonSet.
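
shouldIgnoreNodeUpdate itself is short; here is a paraphrased sketch of its 1.13 logic, not the verbatim source (upstream's helper is called nodeInSameCondition):

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	apiequality "k8s.io/apimachinery/pkg/api/equality"
)

// shouldIgnoreNodeUpdateSketch: condition changes are never ignored; otherwise the
// update is ignored iff the Nodes are identical once ResourceVersion and Conditions
// are normalized away. Nodes are passed by value, so the normalization stays local.
func shouldIgnoreNodeUpdateSketch(oldNode, curNode v1.Node) bool {
	if !sameTrueConditions(oldNode.Status.Conditions, curNode.Status.Conditions) {
		return false
	}
	oldNode.ResourceVersion = curNode.ResourceVersion
	oldNode.Status.Conditions = curNode.Status.Conditions
	return apiequality.Semantic.DeepEqual(oldNode, curNode)
}

// sameTrueConditions compares only the condition types currently set to True,
// mirroring what upstream's nodeInSameCondition does.
func sameTrueConditions(old, cur []v1.NodeCondition) bool {
	trueTypes := map[v1.NodeConditionType]bool{}
	for _, c := range old {
		if c.Status == v1.ConditionTrue {
			trueTypes[c.Type] = true
		}
	}
	for _, c := range cur {
		if c.Status != v1.ConditionTrue {
			continue
		}
		if !trueTypes[c.Type] {
			return false
		}
		delete(trueTypes, c.Type)
	}
	return len(trueTypes) == 0
}

func main() {
	a, b := v1.Node{}, v1.Node{}
	b.ResourceVersion = "2" // only ResourceVersion differs
	fmt.Println(shouldIgnoreNodeUpdateSketch(a, b)) // true: safe to ignore
}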

Overall Logic of the DaemonSet Controller

(Figure: overall flow diagram of the DaemonSet Controller; image omitted)

Summary

This article walked through the DaemonSet Controller's structure, creation, synchronization, scheduling and rolling updates at the source-code level; understanding these internals helps before relying on DaemonSets for large-scale deployments in production. In the next post I will start from concrete problems and analyze DaemonSet behavior from the user's point of view: how a DaemonSet reacts to Node Taint changes, why deleting a DaemonSet can hang and how to fix it, what happens to DaemonSet Pods when a Node goes NotReady, and so on.

