‘Run K8s’—Do you remember when those T-shirts first appeared?

They cleverly appropriated Run DMC’s retro hip-hop cool and tied it neatly to Kubernetes, a new way to deploy and manage the production apps that Docker had helpfully shown us how to containerize.

Jump forward a few years, and containers are now a commodity.

The bastion of open-source cloud native, the CNCF, declared containers the ‘new normal’ in its most recent annual survey, highlighting that 64% of end users run Kubernetes in production.

Today, K8s is the most popular foundational infrastructure for running and orchestrating containers, and its extensibility has made it possible to build observability tools, security controls, service meshes, and many more advanced technologies on top of the platform.

But as much as Kubernetes (K8s) has grown in popularity, the challenges of running it in production remain.

[Image: Two developers talking at KubeCon in 2018; one is wearing a T-shirt with the words ‘Run K8s’ on it.]

K8s told us to ‘walk this way,’ but the journey is not without problems. Image credit: Chris Thornett, 2018

We started 360 Cloud Platforms as a trusted partner for Kubernetes support services because we saw too many companies struggling with the complexity of their production Kubernetes deployments. There are many challenges we could pull from the ‘pain drawer’ for this blog, but this time we’ve selected three of the biggest that we see all too often. We’ll cover others in future posts, but these are the most common Kubernetes problems.

Kubernetes Challenge #1: Keeping up with Rapid Change

After the Linux kernel, Kubernetes is the largest open-source project in the world. Open-source software evolves faster than proprietary software, driven by community collaboration, contributors’ demands for new features, and necessary patches. K8s moves fast for an infrastructure technology, even a cloud-native one, with three releases a year and monthly patches.

Change means production updates, and, anecdotally, keeping up with them is a common Kubernetes challenge we see across a wide gamut of enterprise organizations. Sometimes the delay is for pragmatic reasons rather than a lack of team bandwidth.

According to Datadog’s Container Report (updated November 2022), organizations often delay updates, adopting a ‘wait and see’ approach for at least a year before upgrading to new versions.

Lagging behind on updates isn’t isolated to running and maintaining open-source Kubernetes; it is common with vendored Kubernetes as well. Vendoring a specific version of the Kubernetes source code directly within your codebase or repo has recognized benefits: it enables tighter integration and greater control over the K8s code you rely on. But running older versions can introduce security risks and impact your environments.

The delay in updating is often because operators are concerned about losing cluster stability and want to avoid API-compatibility headaches. Like all software, Kubernetes evolves, and older APIs are deprecated and eventually removed. Yet Datadog suggests that many hosts run a Kubernetes release that’s eighteen months old, which means they are using a version of K8s that is half a year past the end of active support and over four months past the end of maintenance support.

In our experience, you should be running at least the oldest currently supported and security-patched version of your K8s platform. Rather than delaying upgrades, plan the migration to newer API versions or alternatives ahead of each deprecation deadline, as the example below illustrates. Of course, that assumes your team has the expertise or resources available to handle the work.
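As an illustration, consider the Ingress resource: its old `extensions/v1beta1` API was removed in Kubernetes 1.22, so manifests had to move to the stable API group ahead of that release. A minimal sketch of the migrated manifest (the name, hostname, and service here are hypothetical):

```yaml
# Migrated from the removed extensions/v1beta1 API to the stable group.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress        # hypothetical name, for illustration only
spec:
  rules:
    - host: app.example.com    # hypothetical hostname
      http:
        paths:
          - path: /
            pathType: Prefix   # pathType is a required field in the v1 API
            backend:
              service:         # v1 nests the backend under 'service'
                name: example-svc
                port:
                  number: 80
```

Tools such as Fairwinds’ open-source Pluto, or the deprecation warnings kubectl has surfaced since v1.19, can help you find manifests that still reference soon-to-be-removed APIs.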

“Many hosts use a Kubernetes release that’s eighteen months old, which means most are using a version of K8s that has flown past active support”

Kubernetes Challenge #2: Avoiding Misconfiguration

A new question in VMware’s State of Kubernetes 2023 survey revealed that the top K8s security concern this year is misconfiguration, cited by 55% of respondents. Issues stemming from improper setup are often overlooked until they surface post-deployment, when an application is running in production and problems are more complicated and costly to fix.

The highly customizable nature of containers and Kubernetes is a key strength, but it is also a source of risk. According to Red Hat’s State of Kubernetes Security Report 2023:

“The dynamic environments in which containers operate, with containers turning on and off rapidly, also make it a challenge to maintain a consistent security posture. The shared host operating system kernel and other resources also mean that a single vulnerability in one container can potentially affect other containers on the same host, while a vulnerability in the host itself can potentially affect all containers running on that host. The large number of third-party components, such as base images, libraries, and dependencies, adds yet another layer for individuals to configure and ensure that they remain free of vulnerabilities.”

Regular news stories drive home this concern: a recent investigation by researchers at Aqua Security uncovered 350 businesses, including Fortune 500 companies, whose misconfigured clusters left them exposed; 60% of those clusters were under active attack by crypto miners. The exposure stemmed from two misconfigurations that are common in K8s but easily avoided.
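Many such exposures come down to permissive defaults that nobody tightened. As a minimal, illustrative sketch (the pod name and image are placeholders, and this is a starting point rather than a complete hardening guide), a pod spec that sheds the most commonly abused privileges might look like this:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hardened-example              # hypothetical name
spec:
  securityContext:
    runAsNonRoot: true                # refuse to start containers running as root
    seccompProfile:
      type: RuntimeDefault            # apply the runtime's default syscall filter
  containers:
    - name: app
      image: registry.example.com/app:1.0   # placeholder image
      securityContext:
        allowPrivilegeEscalation: false     # block setuid-style escalation
        readOnlyRootFilesystem: true        # no writes to the container filesystem
        capabilities:
          drop: ["ALL"]                     # shed every Linux capability
```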

[Image: A worn metal container with a large padlock on the front.]
Kubernetes challenges: security has become the No. 1 concern, even delaying app deployments.

Kubernetes Challenge #3: Tackling Security Concerns

We have already touched on the subject, but security is a Kubernetes problem that any organization managing K8s needs to address properly.

In VMware’s 2023 State of Kubernetes it was the top issue, and in Red Hat’s 2023 State of Kubernetes Security Report, 67% of respondents (600 DevOps practitioners, engineers, and security professionals) indicated that they had delayed or slowed app deployments as a direct response to security concerns.

In its vanilla form, Kubernetes leaves many security gaps that users are expected to close themselves to secure the infrastructure. Fortunately, the CIS Kubernetes Benchmark is an excellent resource for assessing cluster security, and it is baked into many tools, including Aqua Security’s popular open-source kube-bench. However, some issues that kube-bench flags can’t be assessed by the tool and must be investigated manually.
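The kube-bench repository ships a ready-made Job manifest for running the checks in-cluster; the trimmed-down sketch below captures its general shape (the upstream version mounts several more host paths, so treat these details as a simplification rather than the project’s exact manifest):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: kube-bench
spec:
  template:
    spec:
      hostPID: true                       # kube-bench inspects host processes
      restartPolicy: Never
      containers:
        - name: kube-bench
          image: docker.io/aquasec/kube-bench:latest
          command: ["kube-bench"]
          volumeMounts:
            - name: etc-kubernetes
              mountPath: /etc/kubernetes
              readOnly: true              # config files the CIS checks read
      volumes:
        - name: etc-kubernetes
          hostPath:
            path: /etc/kubernetes
```

Once the Job completes, the report can be read with `kubectl logs job/kube-bench`.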

Clusters must be isolated and independent from each other to keep the platform secure. kube-hunter is a useful tool for detecting cluster vulnerabilities and can run internal, network, and remote scans. Implementing NetworkPolicies for all apps and services, with explicit ingress and egress paths, is also necessary; see the sketch below. Of course, all of this requires time and resources, and, as you might expect, we’ve generally found that a vendored platform or a cloud provider’s managed service helps most enterprises reduce platform maintenance time.
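A common pattern is to deny all traffic in a namespace by default and then explicitly allow only what each app needs. A minimal sketch, in which the namespace, labels, and port are hypothetical:

```yaml
# Deny all ingress and egress in the namespace by default...
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: demo              # hypothetical namespace
spec:
  podSelector: {}              # an empty selector matches every pod
  policyTypes: [Ingress, Egress]
---
# ...then open explicit paths for the traffic each app actually needs.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-web-to-api
  namespace: demo
spec:
  podSelector:
    matchLabels:
      app: api                 # hypothetical label
  policyTypes: [Ingress]
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: web         # only the web tier may reach the API pods
      ports:
        - protocol: TCP
          port: 8080
```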

When considering internal security controls and integrations, Kubernetes only provides basic mechanisms for managing user access and doesn’t handle other external security mechanisms, such as SSH access to nodes. Vendored platforms such as Rancher provide some out-of-the-box security configurations, but the biggest risk is operational complacency, where vendored platforms aren’t frequently patched and there is no configuration enforcement or standardization across clusters.
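Those built-in access controls are essentially RBAC objects, which only bind permissions to identities; the identities themselves must come from outside the cluster, such as your identity provider. A minimal sketch (namespace and user are hypothetical):

```yaml
# Grant read-only access to pods in one namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: demo              # hypothetical namespace
rules:
  - apiGroups: [""]            # "" is the core API group
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: demo
subjects:
  - kind: User
    name: jane@example.com     # hypothetical user; K8s doesn't manage users itself
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```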

“The lack of internal experience/expertise (at 57%) is the biggest challenge respondents encounter managing Kubernetes”

Infrastructure and internal platform security need regular third-party evaluations and audits, such as an RBAC risk audit, as well as regular security reviews. For internal security, we implement scanning tools and agent-based systems that generate reports fed into SREs’ day-to-day operations. Where an enterprise uses a Security Information and Event Management (SIEM) system, you must ensure all system logs are shipped to it.
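One concrete log source worth routing to the SIEM is the API server’s audit trail, enabled by pointing the kube-apiserver’s `--audit-policy-file` flag at a Policy. A minimal sketch, with rule choices that are illustrative rather than a recommendation:

```yaml
apiVersion: audit.k8s.io/v1
kind: Policy
omitStages:
  - RequestReceived                 # skip the noisy pre-processing stage
rules:
  - level: Metadata                 # never log secret payloads, only who touched them
    resources:
      - group: ""
        resources: ["secrets"]
  - level: RequestResponse          # full detail for RBAC changes
    resources:
      - group: "rbac.authorization.k8s.io"
  - level: Metadata                 # a reasonable default for everything else
```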

Another downside of Kubernetes security has been its cost in money and resources. Dipping into Red Hat’s security report again: 1 in 5 respondents reported that staff were terminated because of security lapses, and 37% experienced revenue or customer loss from a container or Kubernetes security incident.

Another surprising statistic, again from VMware’s State of Kubernetes, relates to staffing: a lack of internal experience and expertise (at 57%) is the biggest challenge respondents encounter when managing Kubernetes. That figure is more startling still when you discover it has risen by 13% since last year.

Contact Us

Unsurprisingly, many organizations find that working with outside talent is an effective way to address the lack of in-house skills. Kubernetes challenges can be managed with the proper platform setup, and keeping K8s up to date relieves operational strain and counters the security concerns that come with running Kubernetes in production.

In the next blog, we’ll explore further pain points, but in the meantime, contact us today for an informal chat about your current Kubernetes pitfalls and how we can help.

Let us deal with your Kubernetes challenges, so you can focus on driving value in your core business.
