Managing your application with Kubernetes
To manage applications with Kubernetes, we use the `apply` command. This command takes a file or a directory of files and, when run, makes the state of the Kubernetes cluster match the state defined in those files.
Using the Kubernetes CLI, `kubectl`, we can create objects such as Pods, Deployments, etc. by providing a YAML file for each object.
A Pod represents a single instance of an app running in the cluster.
Here’s an example of a YAML file, `simple-pod.yaml`, that defines a Pod. The `kind` field describes the type of object you want to create. The Pod spec must contain at least one container; `image` specifies which container image will run in the Pod. Finally, we list the port to expose from the container.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.14.2
    ports:
    - containerPort: 80
```
To create the Pod defined in the YAML file above, run the following command.
```
kubectl apply -f simple-pod.yaml
```
Alternatively, a similar Pod can be created imperatively, without a YAML file:
```
kubectl run nginx-deployment --image=nginx --port=80
```
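Since a Pod spec may list more than one container, a Pod can also bundle the nginx container with a sidecar. A sketch (the sidecar name, image, and command are illustrative, not from the original example):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-with-sidecar   # illustrative name
spec:
  containers:
  - name: nginx
    image: nginx:1.14.2
    ports:
    - containerPort: 80
  - name: log-sidecar        # hypothetical second container
    image: busybox:1.36
    command: ["sh", "-c", "tail -f /dev/null"]
```

All containers in a Pod share the same network namespace, so they can reach each other on localhost.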
A ReplicaSet adds or deletes Pods as needed to maintain a specified number of replicas. Creating replicas of a Pod scales an application horizontally. Replicas are usually created as part of a Deployment, but here’s a sample YAML file that creates a ReplicaSet with 2 replicas directly, without a Deployment.
```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx-replicaset
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
```
The replica count of an existing Deployment can also be changed imperatively:
```
kubectl scale --replicas=2 deployment nginx-deployment
```
A Deployment is an object that can provide updates to both Pods and ReplicaSets. Unlike a ReplicaSet, a Deployment allows you to perform rolling updates of a Pod: a rolling update scales the new version up to the appropriate number of replicas while scaling the old version down to zero.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
```
A Deployment can also be created imperatively:
```
kubectl create deployment nginx-deployment --image=nginx
```
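The rolling-update behavior described above can be tuned through the Deployment’s `strategy` field. A sketch (the surge and unavailability values are illustrative choices):

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1         # at most one extra Pod above the desired count during an update
      maxUnavailable: 0   # never drop below the desired count
```

With these values, each new-version Pod must become ready before an old-version Pod is removed.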
A Service enables network access to a set of Pods, either from within the cluster or from external processes.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  - port: 443
    targetPort: 443
    protocol: TCP
    name: https
  selector:
    app: nginx
```
A NodePort Service can also be created imperatively:
```
kubectl create service nodeport nginx-deployment --tcp=80:80
```
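When access is needed only from within the cluster, the default Service type, ClusterIP, is sufficient. A minimal sketch (the Service name is illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-internal   # illustrative name
spec:
  type: ClusterIP        # the default; assigns a cluster-internal IP
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  selector:
    app: nginx
```

A ClusterIP Service is reachable only inside the cluster, whereas the NodePort Service above additionally exposes a port on every node.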
A Horizontal Pod Autoscaler (HPA) allows you to scale a workload up or down depending on traffic. It is configured by specifying a target CPU or memory utilization. The control plane periodically checks whether the desired state is met and scales the workload up or down as needed.
One way to do this is to define the autoscaler in a YAML file:
```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-deployment
  namespace: default
spec:
  maxReplicas: 10
  minReplicas: 5
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-deployment
  targetCPUUtilizationPercentage: 10
```
An autoscaler can also be created imperatively:
```
kubectl autoscale deploy nginx-deployment --min=5 --max=10 --cpu-percent=50
```
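Memory-based scaling, mentioned above, cannot be expressed with `targetCPUUtilizationPercentage`; it requires the `autoscaling/v2` API. A sketch (the 70% target is an illustrative value):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-deployment
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-deployment
  minReplicas: 5
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 70   # illustrative target
```

Note that utilization targets are computed against the containers’ resource requests, so the Pods must declare a request for the metric being scaled on.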