What is Kubernetes?

Production-Grade Container Orchestration. Kubernetes, also known as K8s, is an open-source system for automating deployment, scaling, and management of containerized applications. It groups containers that make up an application into logical units for easy management and discovery.
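To illustrate grouping containers into logical units, here is a minimal, hypothetical Deployment manifest (the name `web`, labels, and image tag are placeholders, not from the original text):

```yaml
# Illustrative minimal Deployment: three replicas of one container,
# declared once and managed by K8s as a single logical unit.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web              # placeholder name
  labels:
    app: web
spec:
  replicas: 3            # K8s keeps three Pods running at all times
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25   # any container image works here
        ports:
        - containerPort: 80
```

Applied with `kubectl apply -f deployment.yaml`, this declarative description is all K8s needs: it continuously reconciles the cluster toward the declared state.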

K8s makes it easy to configure applications declaratively and to automate their operation. It has a large and rapidly growing ecosystem, and Kubernetes services, support, and tools are widely available.

When to use Docker or K8s?

Docker is a container engine used to build and isolate your containerized application, while K8s is a container scheduler/orchestration tool used to deploy and scale that application by managing many containers across multiple host machines. The two are complementary: K8s typically runs the containers that a tool like Docker builds.

What types of containers can K8s run?

The container runtime is the software responsible for running the containers. K8s supports any runtime that implements the Kubernetes Container Runtime Interface (CRI), most commonly containerd and CRI-O. Docker Engine was supported through the built-in dockershim adapter until its removal in Kubernetes v1.24 (it can still be used via the CRI-compliant cri-dockerd adapter), and the rkt-based rktlet is no longer maintained.
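On a node, the kubelet is pointed at whichever CRI implementation is installed through its endpoint socket. As a sketch (the socket path is the common containerd default and varies by distribution):

```
# Illustrative kubelet flag selecting containerd as the CRI runtime;
# the socket path varies by installation.
--container-runtime-endpoint=unix:///run/containerd/containerd.sock
```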

Who created K8s?

K8s was founded by Joe Beda, Brendan Burns, and Craig McLuckie, who were quickly joined by other Google engineers including Brian Grant and Tim Hockin. Google announced the project in mid-2014.

Where to use K8s?

You can run K8s anywhere: on dedicated servers (bare metal), on virtual machines, on public cloud providers, on private clouds, or in hybrid cloud environments. One of its main advantages is that it works on many different types of infrastructure.

Most important features of K8s

Among its most important characteristics we can highlight the following:

  • Horizontal scaling. Based on CPU usage, K8s can scale our applications automatically (autoscaling) or manually (with a simple command).
  • Self-healing. K8s automatically restarts containers that fail, replaces and reschedules containers when a node dies, and kills containers that do not respond to user-defined health checks.
  • Scheduling of containers onto nodes. K8s decides which node each container runs on, based on the resources it requires and other constraints. In addition, we can mix critical and best-effort workloads to drive up utilization and save resources.
  • Automated rollouts and rollbacks. When we update an application or change its configuration, K8s rolls the changes out progressively while monitoring health, and rolls back automatically if something goes wrong in any of the instances.
  • Storage orchestration. K8s automatically mounts the storage system the containers need: local storage, a public cloud provider, or a network storage system such as NFS, Flocker, or Gluster.
  • Persistent storage. K8s is supported by platforms such as Amazon Web Services and Google Cloud Platform, and vendors such as Red Hat, Dell EMC, and NetApp provide persistent storage for it.
  • Service discovery. K8s gives containers their own IP addresses and a single DNS name for each set of containers, so we do not need external resources for service discovery.
  • Secret management. Sensitive information such as passwords or SSH keys can be stored safely in Secrets, and K8s lets us deploy and update our applications without exposing that confidential information in the configuration.
  • Large and heterogeneous clusters. K8s can run very large clusters of containers, and it lets us build a cluster by combining different virtual machines and on-premises physical servers.
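The autoscaling mentioned in the list above can be configured declaratively. As an illustrative sketch (the target Deployment name `web`, the replica bounds, and the CPU threshold are all placeholders):

```yaml
# Hypothetical HorizontalPodAutoscaler: scales the "web" Deployment
# between 2 and 10 replicas, targeting 80% average CPU utilization.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80
```

The manual equivalent is a single command such as `kubectl scale deployment/web --replicas=5`.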
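The self-healing and secret-management features above can be sketched in one manifest (all names, values, and paths here are illustrative, not from the original text):

```yaml
# Illustrative Secret plus Pod: a liveness probe drives self-healing,
# and a password is injected from the Secret rather than hard-coded.
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
stringData:
  password: change-me        # placeholder value only
---
apiVersion: v1
kind: Pod
metadata:
  name: api
spec:
  containers:
  - name: api
    image: nginx:1.25        # placeholder image
    env:
    - name: DB_PASSWORD
      valueFrom:
        secretKeyRef:        # pulled from the Secret at runtime,
          name: db-credentials   # never written into the Pod spec
          key: password
    livenessProbe:           # kubelet restarts the container if this fails
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10
```

If the HTTP probe stops responding, the kubelet kills and restarts the container automatically, which is the self-healing behavior described above.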