In this article, we’ll briefly focus on Kubernetes services and the blue-green deployment strategy.
What is a service in Kubernetes?
A service is responsible for making our pods discoverable inside the cluster network or exposing them to the internet. A service identifies its pods by their labels, using a label selector.
What are the types of services?
- ClusterIP
- NodePort
- LoadBalancer
These service objects live inside the cluster and route traffic to our pods. Let’s understand how to create each of these services and how to map our pods to them.
What is Cluster IP Service?
This exposes the service on a cluster-internal IP, so the service is only reachable from within the cluster. ClusterIP is the default service type.
To create this service we need a YAML manifest. The YAML for a ClusterIP service is given below.
#service ClusterIP
apiVersion: v1
kind: Service
metadata:
  name: my-clusterip
spec:
  selector:
    app: nginx
  type: ClusterIP
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP
In this YAML, we’ll use app: nginx as the selector, which is the same label that was used in the previous blog to create the pods. This label maps the pods to the ClusterIP service and exposes them internally.
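For reference, the pods being selected might come from a Deployment like the following sketch. The Deployment name, replica count, and image are illustrative assumptions; only the app: nginx label in the pod template needs to match the service’s selector.

```yaml
# Hypothetical Deployment whose pods carry the app: nginx label.
# Name, replicas, and image are assumptions for illustration.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx   # this label is what the service selector matches
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80
```

Any pod carrying the app: nginx label, regardless of which Deployment created it, becomes an endpoint of the service.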
What is a NodePort?
The NodePort service exposes the service on each node’s IP at a static port. A ClusterIP service, to which the NodePort service routes, is created automatically; in other words, the NodePort service is built on top of the ClusterIP service. We can then contact the NodePort service from outside the cluster by using NodeIP:NodePort. If no port is assigned, one is allocated from the default NodePort range, 30000–32767.
#NodePort Service
apiVersion: v1
kind: Service
metadata:
  name: my-nodeport
spec:
  selector:
    app: nginx
  type: NodePort
  ports:
    - name: http
      port: 80
      targetPort: 80
      nodePort: 30002
      protocol: TCP
In this YAML, we’ll use the app: nginx label, which maps the pods to the NodePort service and exposes them externally on port 30002 of every node.
What is a LoadBalancer service?
A LoadBalancer service exposes the service externally using a cloud provider’s load balancer. The NodePort and ClusterIP services, to which the external load balancer routes, are created automatically.
If you are using a custom Kubernetes cluster (Minikube, kubeadm, etc.), there is no integrated load balancer, unlike managed offerings such as Azure. With this default setup only NodePort is used, and the load balancer must be configured externally.
# service Load Balancer
apiVersion: v1
kind: Service
metadata:
  name: my-nginx-lb
spec:
  selector:
    app: nginx
  type: LoadBalancer
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP
Note: Always remember that the container port and the target port should be the same.
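As a sketch of that note, the pod template would declare the same port that the service’s targetPort points at (the container name and image here are illustrative assumptions):

```yaml
# Pod template fragment: containerPort must match the service's targetPort.
spec:
  containers:
    - name: nginx
      image: nginx:latest
      ports:
        - containerPort: 80   # matches targetPort: 80 in the service manifests above
```

If the two values diverge, the service still forwards traffic to targetPort, but nothing in the container is listening there and connections fail.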
Blue-Green Deployments
In a blue-green deployment strategy (sometimes referred to as red/black), the old version of the application (green) and the new version (blue) are deployed at the same time. While both are deployed, users only have access to the green version, whereas the blue version is available to the QA team for test automation on a separate service or via direct port-forwarding.
Once the new version has been tested and signed off for release, the service is switched to the blue version, and the old green version is scaled down:
apiVersion: v1
kind: Service
metadata:
  name: bg-deployment
spec:
  selector:
    app: nginx
    version: "02"
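The two versions behind that switch could be modeled as two Deployments that differ only in their version label. This is a sketch under assumptions: the Deployment names, replica counts, and image tags are illustrative, and it follows this article’s convention of green as the old version ("01") and blue as the new version ("02").

```yaml
# Hypothetical "green" (old) and "blue" (new) Deployments.
# Only the version label differs; the service selector above
# (version: "02") decides which set of pods receives traffic.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-green
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
      version: "01"
  template:
    metadata:
      labels:
        app: nginx
        version: "01"
    spec:
      containers:
        - name: nginx
          image: nginx:1.24   # old version (illustrative tag)
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-blue
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
      version: "02"
  template:
    metadata:
      labels:
        app: nginx
        version: "02"
    spec:
      containers:
        - name: nginx
          image: nginx:1.25   # new version (illustrative tag)
```

Switching the service selector from version: "01" to version: "02" cuts all traffic over to the new pods at once, after which the green Deployment can be scaled to zero.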
The other deployment types are as follows:
- Canary deployment
- A/B testing
- Ramped deployment, also called rolling deployment (the default Kubernetes rollout method)
The most widely used deployment strategy is the rolling deployment; the others are followed far less often. For more information about the other deployment strategies, visit https://azure.microsoft.com/en-in/overview/kubernetes-deployment-strategy/