Maybe not best practices. But some practices. Some mentions of the worst practices.
This is a small collection of things that worked for me, my colleagues and most people I talked to, as well as some things that do not work, at least not beyond a single-dev play-around installation or your typical hello-world kubernetes example.
I do not think the rationale behind any of these points needs much explanation, but I guess you know how to contact me if I err in this regard. Most of these things are rather evident once you start to think about them, but of course in the wild you only start thinking about such things once you run into the issues.
Run a pocket knife deployment
This is a neat and easy way to add a little more "feeling like home" for your sys admin and to give you a way to inspect and understand your cluster from the inside. Create a container that contains all your favorite tools, such as telnet, ping, netcat, mysql client, redis client, htop, curl, route, qperf, traceroute, bonnie++ and whatever your heart needs. Run it as a deployment in your cluster, possibly with some volume attached. You can then always use it to debug services, try to connect to things, check whether your volumes work as intended and see if you can break things from within a pod.
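A minimal sketch of such a deployment; the image name is made up and assumed to be a tools image you built and pushed yourself:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pocket-knife
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pocket-knife
  template:
    metadata:
      labels:
        app: pocket-knife
    spec:
      containers:
        - name: tools
          # hypothetical image containing your favorite debugging tools
          image: registry.example.com/pocket-knife:latest
          # keep the pod alive so you can exec into it
          # (assumes GNU coreutils sleep in the image)
          command: ["sleep", "infinity"]
```

You can then hop into it with kubectl exec -it deploy/pocket-knife -- bash whenever you need to poke at the cluster from the inside.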
When moving to kubernetes, do not expect to immediately run production straight from CI/CD, replicated into different clusters across multiple sites.
Get a cluster running, build the first templates for your application, add some monitoring and then iterate from that point up.
Often you really need less than you think to benefit from kubernetes, e.g. when you can move only part of your workload, like webservers and cache hosts, to the kubernetes cluster and reflect your old database servers via endpoints in the cluster.
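As a sketch, reflecting an external database into the cluster works with a selector-less Service plus a manually maintained Endpoints object of the same name; the names and the IP here are made up:

```yaml
# Service without a selector: kubernetes will not manage its endpoints
apiVersion: v1
kind: Service
metadata:
  name: legacy-mysql
spec:
  ports:
    - port: 3306
      targetPort: 3306
---
# Endpoints object with the same name, pointing at the old database host
apiVersion: v1
kind: Endpoints
metadata:
  name: legacy-mysql
subsets:
  - addresses:
      - ip: 10.0.0.12   # hypothetical address of your existing database
    ports:
      - port: 3306
```

Pods inside the cluster can then reach the old database at legacy-mysql:3306 as if it were any other in-cluster service.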
Prioritize and persist
Prioritizing changes is important.
Often you will find yourself in a position where you could make an improvement to the cluster or deployment mechanism, but it would introduce changes your team cannot keep up with.
Some changes will be inevitable, but sometimes you can solve the problem on a higher level, removing interactions and thus the need for people to learn new tools. This in turn often comes at the cost of you digging into something a bit longer and deeper than you initially wanted.
Sometimes the maintenance pressure is too high, so you have to deploy intermediate steps on the road to full automation and get back into this sacred realm of "SREs spending at most 50% of their time doing manual maintenance".
Helm allows you to template your YAML files. Do not restrict yourself to building full-blown apps with it. Sometimes you just have a few YAML files that need to be templated, e.g. because you want to deploy the same thing across several namespaces. Using helm for this might feel like overkill, but really, it isn't. Even if all you would do is

for namespace in a b c d; do kubectl apply -n $namespace -f my.yml; done

create a small helm chart for this. Sometimes you do not want a full helm release; in that case helm template . | kubectl apply -f - does the job just as well. It is just much cleaner, easier to maintain and uniform with respect to the rest of your deployments.
There are tools with similar use cases as helm, most notably kustomize, but helm is the most powerful tool for this job and there are enough situations where you cannot do without helm. Better keep the toolchain slim with only kubectl, and with some practice you will notice that kustomize and similar tools are actually not really more lightweight, but only less powerful.
The biggest problem with creating helm charts, for beginners, is that helm create generates a lot of resources which might or might not be irritating. In the end you can often get by with a single templated .yml file, or a few of them, in the templates/ directory of your helm chart.
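As a sketch of how small such a chart can be for the multi-namespace case: a Chart.yaml plus a single templated manifest is enough. All names and values here are hypothetical:

```yaml
# mychart/Chart.yaml
apiVersion: v2
name: mychart
version: 0.1.0
---
# mychart/templates/configmap.yml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-config
data:
  # a value taken from values.yaml or --set, instead of a hardcoded copy
  environment: {{ .Values.environment | quote }}
```

Rendering and applying it per namespace then replaces the shell loop: helm template ./mychart --set environment=a | kubectl apply -n a -f -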
Do not do the docker-build-config-from-env spaghetti image build
This is really a big mistake and a relic from a time when docker didn't have compose and k8s was not widespread. If you want to see how not to do it, look at anything from bitnami. Try to figure out how certain configs are created. I dare you.
In this anti-pattern you put scripts into the container which create your configs from passed environment variables, often across multi-stage container builds, where you reference the script from a base image three layers below.
Don't do that. Instead, just put files into ConfigMaps or Secrets and mount them at the appropriate places. It works like a charm, as if ConfigMaps were made for exactly this use case. With helm or similar tools you can even template them, often using exactly the same values which are otherwise passed as env variables. Your chart and container are just much more maintainable and easier to understand.
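A minimal sketch of this pattern, with made-up names and config content: the config file lives in a ConfigMap and is mounted into the container instead of being generated by an entrypoint script.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  app.conf: |
    listen 8080
    loglevel info
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: registry.example.com/app:latest   # hypothetical image
      volumeMounts:
        - name: config
          # app.conf shows up as /etc/app/app.conf inside the container
          mountPath: /etc/app
  volumes:
    - name: config
      configMap:
        name: app-config
```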
Even docker compose is capable of that, so there really is no reason anymore to build images in this old spaghetti style, unless you actually want to obfuscate your stuff. 1
Always create resources from helm or YAML files
Never use kubectl create or kubectl expose. Those tools are the goto of the future: they work with your first fun container on some play-around cluster, but they are just a weird way to break abstraction layers and are not really maintainable. Most people struggling with moving things around or making deployments reproducible did use kubectl create or kubectl expose to create things.
Create YAML files and apply these. Keep them versioned in git or your favorite open source distributed version control system. And if you did use the aforementioned commands, do not hesitate to make things right. Sooner or later you will have to do it right anyway.
KISS and adopt the unix philosophy
Everything is a resource. Build and use components that do one thing and do it well. Build and use components that work well with one another. Write programs that interact via good APIs. Keep state and logic separate.
Which I actually accuse some of these companies of, due to the usual conflict of interest: when your main money source is selling support, you do not tend to build stuff that runs without many problems and arcane incantations. ↩