Canarys | IT Services


NODE AFFINITY


The Kubernetes node affinity feature ensures that pods are hosted on particular nodes. In the previous blog we placed large data-processing pods onto node-1, and we achieved that using node selectors. As mentioned there, node selectors cannot express advanced rules such as "Large or Medium" or "not Small".

Node affinity gives us this advanced capability for placing pods on particular nodes, though greater power comes with higher complexity. The simple node selector spec looks like this:

NODESELECTOR:

spec:
  containers:
  - name: data-processor
    image: data-processor
    ports:
    - containerPort: 80
    imagePullPolicy: Always
  nodeSelector:
    size: Large

The equivalent node affinity YAML looks like this:

NODEAFFINITY:

spec:
 affinity:
  nodeAffinity:
   requiredDuringSchedulingIgnoredDuringExecution:
    nodeSelectorTerms:
    - matchExpressions:
      - key: size
        operator: In
        values:
        - Large
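For context, here is how that affinity stanza sits inside a complete Pod manifest; the pod name and image below are illustrative, following the blog's example. Note the node must already carry the label, e.g. via kubectl label nodes node-1 size=Large.

apiVersion: v1
kind: Pod
metadata:
  name: data-processor
spec:
  containers:
  - name: data-processor
    image: data-processor
    ports:
    - containerPort: 80
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: size
            operator: In
            values:
            - Large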

There is no functional difference between the two; both do the same work. Now suppose the pod may be placed on either Large or Medium nodes; the YAML would look like this:

 

spec:
 affinity:
  nodeAffinity:
   requiredDuringSchedulingIgnoredDuringExecution:
    nodeSelectorTerms:
    - matchExpressions:
      - key: size
        operator: In
        values:
        - Large
        - Medium

Here we simply add another entry to the values list, so both Large and Medium match.

Let’s look at another case: you can use the NotIn operator, as in the YAML below. The node affinity rule then matches any node that is not labeled Small.

 

spec:
 affinity:
  nodeAffinity:
   requiredDuringSchedulingIgnoredDuringExecution:
    nodeSelectorTerms:
    - matchExpressions:
      - key: size
        operator: NotIn
        values:
        - Small

We have labeled our nodes Large and Medium, and no node carries a Small label. In that case we can also use the Exists operator, which gives the same result, and it does not need a values section at all. With Exists, the scheduler only checks whether a node has the key size; it does not compare values. Since the Large- and Medium-labeled nodes are the only ones with that key, pods will be scheduled onto them. There are a number of other operators as well; see kubernetes.io for more detailed information.

 

spec:
 affinity:
  nodeAffinity:
   requiredDuringSchedulingIgnoredDuringExecution:
    nodeSelectorTerms:
    - matchExpressions:
      - key: size
        operator: Exists
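One detail worth knowing from the Kubernetes docs: multiple matchExpressions inside a single nodeSelectorTerm are ANDed together, while multiple nodeSelectorTerms are ORed. As a sketch, the term below requires that the size key exists and that its value is not Small:

spec:
 affinity:
  nodeAffinity:
   requiredDuringSchedulingIgnoredDuringExecution:
    nodeSelectorTerms:
    - matchExpressions:
      - key: size
        operator: Exists
      - key: size
        operator: NotIn
        values:
        - Small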

Now that we understand all of these, we are comfortable creating a pod with specific node affinity rules. When a pod is created, its node affinity rules are evaluated and it is scheduled onto a matching node.

Let’s dive a little deeper now.

What if node affinity cannot match any node with the given expressions? In our case, what if there are no nodes with the label size? And what if someone changes the labels on a node where pods are already scheduled — will those pods continue to stay on that node?

All of these questions are answered by the long, sentence-like property under nodeAffinity. It defines the type of node affinity, and the type defines how the scheduler behaves with respect to a pod's node affinity rules.

There are two available node affinity types:

  1. requiredDuringSchedulingIgnoredDuringExecution
  2. preferredDuringSchedulingIgnoredDuringExecution

When considering node affinity, there are two stages in the lifecycle of a pod.

DuringScheduling: this is the stage where a pod does not yet exist and is being created for the first time. If a matching node is available, the pod is scheduled onto it — we are clear about this. But what if no matching label is available, for example because we forgot to label the nodes? That is where type 1 and type 2 differ.

Type 1, requiredDuringScheduling: with the required type, the scheduler mandates that the pod is placed on a node matching the given node affinity rules; if it cannot find one, the pod is not scheduled. Use this type when pod placement is crucial — if no matching node exists, the pod stays unscheduled.

Type 2, preferredDuringScheduling: this helps when running the workload matters more than where it runs. With the preferred type, if no node matches, the scheduler ignores the node affinity rules and places the pod on any available node.
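Note that the preferred type also changes the YAML shape: instead of nodeSelectorTerms, it takes a list of weighted preferences (weight ranges from 1 to 100; nodes matching higher-weight preferences are favored). A minimal sketch, reusing the size label from our example:

spec:
 affinity:
  nodeAffinity:
   preferredDuringSchedulingIgnoredDuringExecution:
   - weight: 1
     preference:
      matchExpressions:
      - key: size
        operator: In
        values:
        - Large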

DuringExecution:

When a pod is already running and a change in the environment affects node affinity — for example, an admin removes the label size=Large from the node — what happens to pods already running on that node? Here again we have type 1 and type 2, and in the current node affinity types both are "Ignored" during execution.

Type 1 and type 2: pods will continue to run on their nodes. Once scheduled, changes to node labels or affinity rules will not impact them.

There are two planned node affinity types

  1. requiredDuringSchedulingRequiredDuringExecution
  2. preferredDuringSchedulingRequiredDuringExecution

In these types, the only change comes in the execution phase: required is added, which means that whenever a relevant change occurs, the pod will be evicted or terminated.

 

         DuringScheduling   DuringExecution
Type 1   Required           Ignored
Type 2   Preferred          Ignored
Type 3   Required           Required
Type 4   Preferred          Required

 

