Pods distribution across nodes

I have a question regarding the distribution of pods across nodes. Given a 3-node cluster, I want to deploy a workload with 2 replicas while making sure that these replicas are placed on different nodes in order to achieve high availability. What are the options other than using nodeAffinity?

Solution:


First of all, node affinity only constrains which nodes your pod is eligible to be scheduled on, based on labels on the node. It therefore does not guarantee that each replica will land on a different node, or that the replicas will be distributed evenly across all of the nodes. A valid solution, however, is to use Pod Topology Spread Constraints.

Pod Topology Spread Constraints rely on node labels to identify the topology domain(s) that each Node is in, and then use a label selector to count the matching Pods in each domain. You can define one or more topologySpreadConstraints to instruct the kube-scheduler how to place each incoming Pod in relation to the existing Pods across your cluster.

kind: Pod
apiVersion: v1
metadata:
  name: mypod
  labels:
    node: node1
spec:
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: zone
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        node: node1
  containers:
  - name: myapp
    image: image_name

maxSkew: 1 describes the degree to which Pods may be unevenly distributed. It must be greater than zero. Its semantics differ according to the value of whenUnsatisfiable.

topologyKey: zone means the even distribution is only applied to nodes that carry a "zone: <any value>" label; nodes sharing the same value for that label are treated as being in the same topology domain.

whenUnsatisfiable: DoNotSchedule tells the scheduler to leave the incoming Pod pending if it cannot satisfy the constraint.

labelSelector is used to find matching Pods. Pods that match this label selector are counted to determine the number of Pods in their corresponding topology domain.
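For the exact scenario in the question (2 replicas spread across a 3-node cluster), a sketch could use the well-known kubernetes.io/hostname node label as the topology key, so that every node is its own topology domain. The Deployment name, app label, and image below are placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      topologySpreadConstraints:
      - maxSkew: 1
        # kubernetes.io/hostname is set by the kubelet on every node,
        # so each node forms its own topology domain.
        topologyKey: kubernetes.io/hostname
        # Keep a replica pending rather than co-locating both on one node.
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            app: myapp
      containers:
      - name: myapp
        image: image_name
```

With maxSkew: 1 and an empty node always available in a 3-node cluster, the difference between the most- and least-loaded domains may be at most 1, so the two replicas are forced onto different nodes.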

You can find out more on Pod Topology Spread Constraints in this documentation: https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/
