Originally published in posts on Medium and Dev.to. Init containers in Kubernetes can be used to run tasks before the rest of a pod is deployed. A common use-case is pre-populating config files (one notable example is providing a custom metrics endpoint), but they can be used to complete any kind of container-based task before the rest of the pod's containers are brought online. Let's take the example of a pod like this:
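The original example manifest isn't reproduced here, so the following is a minimal sketch of such a pod; the names, image tags, and the config URL are all illustrative, not from the original:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  initContainers:
  - name: fetch-config
    image: busybox              # illustrative image
    # Runs to completion before the app container starts; here it
    # pre-populates a config file into a shared volume.
    command: ["sh", "-c", "wget -O /config/app.conf http://config-service/app.conf"]
    volumeMounts:
    - name: config
      mountPath: /config
  containers:
  - name: app
    image: nginx                # illustrative image
    volumeMounts:
    - name: config
      mountPath: /etc/app       # the app reads the pre-populated file here
  volumes:
  - name: config
    emptyDir: {}                # shared scratch volume between init and app containers
```

The init container and the app container share the `emptyDir` volume, which is how the pre-populated file survives into the main container's lifetime.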
Part of operating Kubernetes in production, much like any other orchestration solution, is figuring out how it fits into things like your organization's use of configuration management, automation tooling, CI/CD pipelines, and so on, and a common question is how it can work to your advantage with a tool like Terraform. What I'm talking about here is less a Kubernetes operations example and more an approach to executing a common administration task in Kubernetes using popular DevOps tools: Terraform and the kubeadm cluster-management tooling.
The iperf tool is widely used to benchmark network performance, with a variety of options for conducting throughput tests between a server and a client machine. This just requires starting iperf in server mode on one machine: `iperf -s`, and using client options on another, for example: `iperf -c iperf-server -P 10`. The above connects to your server and conducts a standard throughput test, using the `-P` argument to define the number of parallel connections (client threads) over which to test.
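If you wanted to run the server side of this test as a workload in a Kubernetes cluster, a sketch like the following could work; the image name is a placeholder (substitute any image that provides the `iperf` binary), and the labels and names are assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: iperf-server
  labels:
    app: iperf-server
spec:
  containers:
  - name: iperf
    image: your-iperf-image     # placeholder: any image providing the iperf binary
    args: ["-s"]                # start iperf in server mode
    ports:
    - containerPort: 5001       # iperf's default TCP port
---
apiVersion: v1
kind: Service
metadata:
  name: iperf-server
spec:
  selector:
    app: iperf-server
  ports:
  - port: 5001
    targetPort: 5001
```

With the Service in place, a client pod in the same namespace could run `iperf -c iperf-server -P 10` against the service name.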
One of the common resources in Kubernetes for exposing ports, aiding name-based service discovery, and creating load balancer objects is the Service resource, which can be defined using a manifest like:

```yaml
kind: Service
apiVersion: v1
metadata:
  name: web-service
spec:
  selector:
    app: web
  ports:
  - protocol: TCP
    port: 80
    targetPort: 3000
```

This creates a service, `web-service.namespace`, resolvable within the cluster, and if you add `type: LoadBalancer`, it exposes `<External IP>:<Port>` to the network outside the cluster.
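For clarity on where that field goes, here is a sketch of the LoadBalancer variant: the `type` field sits directly under `spec`, alongside the selector and ports:

```yaml
kind: Service
apiVersion: v1
metadata:
  name: web-service
spec:
  type: LoadBalancer    # provisions an external load balancer where the platform supports it
  selector:
    app: web
  ports:
  - protocol: TCP
    port: 80            # port exposed by the service
    targetPort: 3000    # port the backing pods listen on
```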
Stateful workloads, such as those that rely on a database, are well supported in Kubernetes. As with Deployment resources, there are reconciliation loops that wrap Pods for services like Postgres and MongoDB, which are commonly clustered and require consistent, ordinal naming and a high level of automation for provisioning and locating the volumes holding your persistent data. The example I'd like to share is based on the StatefulSet resource, which, like a Deployment, runs a given number of replicas, has a high level of control over resource consumption, and has full access to the interfaces for storage, but also provides consistent naming for those replicas. So, as in MongoDB ReplicaSets, you can anticipate the name of the adjacent replica node, which makes spinning up a cluster of a known size easy to define, however your manifests are generated and stored.
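A minimal StatefulSet sketch for a MongoDB replica set might look like the following; the image tag, replica-set name, and storage size are illustrative assumptions:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
spec:
  serviceName: mongo          # headless Service giving each replica a stable DNS identity
  replicas: 3
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
      - name: mongo
        image: mongo          # illustrative; pin a version in practice
        command: ["mongod", "--replSet", "rs0", "--bind_ip_all"]
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: data
          mountPath: /data/db
  volumeClaimTemplates:       # one PersistentVolumeClaim is stamped out per replica
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi        # illustrative size
```

Because StatefulSet naming is ordinal, the pods come up as `mongo-0`, `mongo-1`, and `mongo-2`, so each member can predict its peers' addresses when initiating the MongoDB ReplicaSet.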
One of the finer controls in Kubernetes is over container lifecycle. Consider that if a Pod is your basic deployment unit, and replication controllers like Deployments and DaemonSets are just reconciliation loops that track the desired number of replicas against the running number, why shouldn't you be able to schedule a single-use, or recurring, one-off execution of a script in a Pod as well? Well, that's exactly what the Job resource in Kubernetes is for!
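A minimal sketch of a Job that runs a script once to completion; the name, image, and command are illustrative:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: one-off-task
spec:
  backoffLimit: 3             # retry the pod up to 3 times on failure
  template:
    spec:
      restartPolicy: Never    # Jobs require Never or OnFailure
      containers:
      - name: task
        image: busybox        # illustrative image
        command: ["sh", "-c", "echo running one-off task"]
```

The Job controller runs the pod to completion rather than keeping it alive; for the recurring case, the CronJob resource wraps a Job template in a schedule.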
Kubernetes offers the ability to encrypt Secrets at rest, and it's one of many features you can enable by modifying the kube-apiserver binary's start-up flags, for example: `--experimental-encryption-provider-config=/etc/kubernetes/secrets.conf`. This flag goes into the /etc/kubernetes/manifests/kube-apiserver.yaml file if you're a Kubeadm user. Easy enough, right? So, you just need the secrets.conf config, which contains your encryption key (which you'll rotate periodically; I'll cover that in a moment) and the provider for Secrets encryption. The providers are built-in, but there are options if you use a KMS, or if your cloud provider offers an option that can be integrated with the Secrets API for this purpose (AWS Secrets Manager, for example, if you're an EKS user). So you need a file like this:
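The original file isn't shown here, so this is a sketch of the encryption config format from the era of the experimental flag, using the built-in `aescbc` provider; the key name is arbitrary and the secret value is a placeholder (generate your own with something like `head -c 32 /dev/urandom | base64`):

```yaml
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
    - secrets               # only the Secrets resource is encrypted here
    providers:
    - aescbc:               # built-in AES-CBC provider; first provider is used for writes
        keys:
        - name: key1
          secret: <base64-encoded 32-byte key>   # placeholder; do not commit a real key
    - identity: {}          # fallback so pre-existing unencrypted Secrets remain readable
```

Provider order matters: the first provider encrypts new writes, while the rest are tried in order when reading, which is what makes key rotation possible later.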