“Let's use Kubernetes!” You now have 8 problems

If you use Docker, the next logical step seems to be switching to Kubernetes (K8s, right?). Well, maybe. But solutions designed for 500 software engineers simultaneously developing a single application are quite different from solutions for 50 people. And a solution for a team of 5 programmers is a different story altogether.

If you work in a small team, Kubernetes is most likely not for you: it will bring you a lot of pain in exchange for extremely modest benefits.
Let's see why.

Everyone loves “moving parts”


Kubernetes has a lot of moving parts: concepts, subsystems, processes, machines, code... and all of that means a lot of complexity.

Several machines


Kubernetes is a distributed system: there is a main machine that controls the others, the workers, and each machine runs workloads in containers.
So we are already talking about at least two physical or virtual machines, needed just to make it work at all. And in return you get... just one machine. If you are going to scale (which is the whole point!), you will need three, four, or maybe as many as seventeen virtual machines.

Lots and lots of code


As of the beginning of March 2020, the Kubernetes code base includes more than 580,000 lines of Go. And that is actual code only, excluding comments, blank lines, and vendored packages. The 2019 security review describes the code base as follows:
“The Kubernetes code base has significant room for improvement. It is large and complex, contains large sections of code with minimal documentation, and has a huge number of dependencies, including systems that are not part of Kubernetes. The code base also contains many cases of re-implemented logic that could be centralized in supporting libraries, which would reduce complexity, simplify fixes, and reduce the documentation burden across different areas of the code base.”
To be honest, the same could be said of many other large projects, but all of this code must function correctly if you do not want your application to fail.

Architectural, operational, configuration, and conceptual complexity


Kubernetes is an all-encompassing system made up of many different services, subsystems, and concepts.
Before you can launch your one and only application, you will have to stand up the following, greatly simplified, architecture (the original image is taken from the Kubernetes documentation):



The K8s concept documentation includes many purely “educational” passages, such as the following snippet:
In Kubernetes, an EndpointSlice contains references to a set of network endpoints. The EndpointSlice controller automatically creates EndpointSlices for a Kubernetes Service when a selector is specified. These EndpointSlices will include references to any Pods that match the Service selector. EndpointSlices group network endpoints together by unique Service and Port combinations.
By default, EndpointSlices managed by the EndpointSlice controller will have no more than 100 endpoints each. Below this scale, EndpointSlices should map 1:1 with Endpoints and Services and have similar performance.


I actually understand what this is about, but look at how many concepts you need to learn: EndpointSlice, Service, selector, Pod, Endpoint.
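To make the concept count concrete, here is a minimal sketch (all names are hypothetical) of a single Service fronting a single Deployment. Even in this tiny example you are already juggling Services, selectors, labels, Pods, Deployments, and ports, while Kubernetes creates the Endpoints/EndpointSlices behind the scenes:

```yaml
# Hypothetical example: a Service routes traffic to Pods whose labels
# match its selector; the matching Pods are produced by a Deployment.
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp          # matches Pods labeled app=myapp
  ports:
    - port: 80          # the Service's own port
      targetPort: 8080  # the container port behind it
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp        # which Pods this Deployment manages
  template:
    metadata:
      labels:
        app: myapp      # the label the Service selector looks for
    spec:
      containers:
        - name: web
          image: myapp:1.0   # assumption: your application image
          ports:
            - containerPort: 8080
```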

And yes, most of the time you will not be using most of these features. Which means that most of the time you do not need Kubernetes at all.

Here is another random snippet:
By default, traffic sent to a ClusterIP or NodePort Service may be routed to any backend address for the Service. Since Kubernetes 1.7 it has been possible to route “external” traffic to the Pods running on the Node that received the traffic, but this is not supported for ClusterIP Services, and more complex topologies - such as routing zonally - have not been possible. The Service Topology feature resolves this by allowing the Service creator to define a policy for routing traffic based upon the Node labels for the originating and destination Nodes.

Here is what the security review says about this:
“Kubernetes is a large system with significant operational complexity. The assessment team found configuration and deployment of Kubernetes to be non-trivial, with certain components having confusing default settings, missing operational controls, and implicitly defined security controls.”


The deeper you get into Kubernetes, the harder the normal development process becomes: you need all of these concepts (Pod, Deployment, Service, etc.) just to get your code running. So you have to spin up a full-fledged K8s system even just for testing, via a virtual machine or nested Docker containers.

And since your application becomes harder to run locally, development is complicated by the many options for solving this problem, from a staging environment, to proxying a local process into the cluster (I wrote such a tool a couple of years ago), to proxying a remote process to a local machine...

You can choose any of these options, but none of them is perfect. The easiest option is not to use Kubernetes at all.

Microservices (a bad idea)


A secondary problem is that since your system lets you run many services, you end up writing many services. And that is a bad idea.
A distributed application is hard to build well. In fact, the more moving parts there are, the more these problems get in the way of your work.

Distributed applications are hard to debug. You will need a whole new class of debugging and logging tools, and they will still give you less than the logs of a monolithic application.

Microservices are an organizational scaling technique: when you have 500 developers working on one production website, it makes sense to accept the cost of a large-scale distributed system if it lets the development teams work independently. Each team of 5 people gets a single microservice and pretends that all the other microservices are external services that it should not trust.

If your whole team consists of 5 people, you have 20 microservices, and no force majeure is compelling you to build a distributed system, then you have miscalculated somewhere. Instead of 5 people per microservice, as in large companies, you get 0.25 people per microservice.

Is Kubernetes ever useful?

Scaling


Kubernetes might come in handy if you need serious scalability. However, let's see what alternatives you have:

  • You can purchase cloud VMs with up to 416 virtual CPUs and 8 TB of RAM, which is an entirely serious amount of power. It will cost you a pretty penny, but it is extremely simple to do.
  • Many simple web applications can be scaled fairly easily with services such as Heroku.

This assumes, of course, that increasing the number of worker VMs would actually help you:

  • Most applications do not require significant scaling; good optimization will get them far enough.
  • The scaling bottleneck for most web applications is the database, not the web workers.

Reliability


The more moving parts there are, the greater the potential for errors.
The Kubernetes features aimed at higher reliability (health checks, rolling deploys) are in many cases already built into simpler tools or much easier to implement. For example, nginx can do health checks on worker processes, and you can use docker-autoheal or something similar to restart those processes automatically.
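As a sketch of that simpler approach, here is a hypothetical Docker Compose file (service name, port, and health endpoint are all assumptions) that combines a container health check with docker-autoheal to restart the container when it goes unhealthy:

```yaml
# docker-compose.yml -- sketch, not a production config.
# Assumes your image serves HTTP on port 8000, exposes a /healthz
# endpoint, and contains the curl binary.
services:
  web:
    image: myapp:latest            # assumption: your application image
    ports:
      - "8000:8000"
    restart: unless-stopped        # restart on crash
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8000/healthz"]
      interval: 30s
      timeout: 3s
      retries: 3
    labels:
      autoheal: "true"             # opt this container into autoheal

  autoheal:
    image: willfarrell/autoheal    # restarts containers marked unhealthy
    environment:
      AUTOHEAL_CONTAINER_LABEL: autoheal
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
```

That is two short stanzas instead of a cluster, and it covers the “restart it when it breaks” case that health checks exist for.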

If you are particularly worried about downtime, your first thought should not be “how do I reduce deployment downtime from 1 second to 1 millisecond?” but rather “how do I make sure that database schema changes can be rolled back if I make a mistake somewhere?”
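One common way to get rollback-safe schema changes is the “expand/contract” pattern: the first migration only adds things, so the old code (and therefore a rolled-back deploy) keeps working. A minimal sketch using Python's built-in sqlite3, with an entirely hypothetical schema:

```python
# Sketch of a rollback-safe ("expand/contract") schema change.
# Hypothetical goal: start storing users' last names separately.
# Step 1 ("expand") only ADDS a nullable column, so old application
# code that knows nothing about it keeps working -- meaning the deploy
# can be rolled back without touching the database again.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('Ada Lovelace')")

# Expand: additive, backward-compatible migration.
conn.execute("ALTER TABLE users ADD COLUMN last_name TEXT")  # nullable

# Old code path: still works, unaware of the new column.
old = conn.execute("SELECT name FROM users").fetchone()
print(old[0])  # Ada Lovelace

# New code path: backfills and starts using the new column.
conn.execute(
    "UPDATE users SET last_name = 'Lovelace' WHERE name LIKE '% Lovelace'"
)
new = conn.execute("SELECT last_name FROM users").fetchone()
print(new[0])  # Lovelace

# Only after the new code is proven in production do you "contract"
# (drop or rename the old column) in a later, separate migration.
```

The destructive half of the change is deferred to a second migration that ships only once nothing depends on the old shape anymore.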
And if you need reliable web workers without a single machine as a point of failure, there are plenty of ways to achieve that without Kubernetes.

Best practices?


In fact, there is no such thing as a universal Best Practice. There are only best practices for each specific situation. So the fact that something is trendy and popular does not at all mean it is the right choice for you.
In some cases, Kubernetes is the best option. In all the rest, it is a waste of time.

Unless you feel an urgent need for all of this complexity, you have a wide selection of tools that will solve your problems: Docker Compose for a single machine, Hashicorp's Nomad for orchestration, Heroku and similar platforms for scaling, and something like Snakemake for computational pipelines.
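For comparison, a minimal Nomad job (a sketch with hypothetical names; a real job would also declare networking and resources) shows how little ceremony the lighter orchestrators demand:

```hcl
# Hypothetical Nomad job: run two copies of a Docker image.
job "web" {
  datacenters = ["dc1"]

  group "app" {
    count = 2                 # two instances of the task

    task "server" {
      driver = "docker"

      config {
        image = "myapp:1.0"   # assumption: your application image
      }
    }
  }
}
```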

Afterword from the Editor


As a cloud provider, we regularly work with a wide variety of clients, from small startups to large organizations with complex business processes and correspondingly complex infrastructure needs. No matter how good a technology is, there will always be cases where applying it creates unnecessary difficulties. You should always start from your actual situation and carefully weigh the pros and cons of the available options. The author of the article lays it on a bit thick, but his message is clear: sometimes, to make the right decision, it is worth looking past the trends and evaluating your project without bias. That saves your developers' energy and puts company resources to better use.