
Critical Kubernetes flaw allows any user to access administrative controls

Jane Elizabeth

Kubernetes has finally hit an unwelcome milestone: its first major security flaw. The vulnerability allows any user to escalate their privileges to administrator level and attack any container running on the same node. Even worse, there’s no simple way to tell if you’ve been affected.

Grim news from Red Hat: Kubernetes has identified its first major security flaw. The vulnerability, which affects every Kubernetes release prior to the newly patched versions, was publicly disclosed on GitHub last week.

Basically, the flaw allows any user to escalate their privileges and gain administrative control through the Kubernetes API server. With this, they can send requests authenticated by the API server’s own TLS credentials and mess with any container running on the same node.

While there’s a patch up already, it looks like this flaw is going to cause some pretty significant soul searching (and log searching). Kubernetes is one of the most popular open source projects today; it’s estimated that around 70% of all enterprises have adopted Kubernetes containers.

With such a large number of targets, it’s likely someone has already been hit. So, let’s get into the details and what this means for developers.

SEE ALSO: “There is a lot of fear in general surrounding cyber security”

What’s the vulnerability?

CVE-2018-1002105 is a real doozy of a flaw with two attack vectors.

Any and all users can establish a connection through the Kubernetes API server to a backend server by sending a carefully crafted request. Once that connection is open, they can send arbitrary requests over it directly to the backend, and those requests are authenticated with the TLS credentials the Kubernetes API server itself used to establish the connection.

As a result, an API call to any aggregated API server endpoint can be escalated to perform any API request against that aggregated API server, because the backend is directly accessible from the Kubernetes API server’s network.

In the default configuration, all users (both authenticated and unauthenticated) are allowed to perform the discovery API calls that make this escalation possible.

Additionally, a pod exec/attach/portforward API call can be escalated to perform any API request against the kubelet API on the node specified in the pod spec. This lets the attacker list all pods on the node, run arbitrary commands inside those pods, and read the command output.

Essentially, by crafting the right request, this vulnerability gives miscreants access to the root of your clusters, unwittingly authenticated by Kubernetes itself.
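
If you want a rough sense of who in your cluster could even reach these code paths, kubectl’s built-in authorization checks are a reasonable starting point. The commands below are a minimal sketch, assuming RBAC is enabled and you hold cluster-admin rights (impersonation requires them); “example-user” is a placeholder, not a real account:

    # Can the anonymous user reach the discovery endpoints at all?
    # (Impersonating system:anonymous requires impersonation permissions.)
    kubectl auth can-i get /api --as=system:anonymous

    # Can a given user open exec sessions against pods? That permission is the
    # entry point for the kubelet escalation described above.
    kubectl auth can-i create pods --subresource=exec --as=example-user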

We talked to George Gerchow, Chief Security Officer at Sumo Logic, about this flaw. Here’s what he had to say:

“The Kubernetes vulnerability is a huge deal, even more so when you think about its scale of exposure. What makes Kubernetes great is its fundamental speed, orchestration, automation and scale. All of those qualities become an instant detriment when a security issue arises as they rapidly extend the reach of the attack.

With that said, any well-versed security professional would expect this to happen, as emerging technology is notoriously known to treat security as an afterthought.

Looking at it from a bigger picture, this is another example of how development and security teams need to work together through DevSecOps to establish guardrails and best practices while maintaining agility. Most organizations lack visibility into the proper security and configuration of not just their containers, but the CI/CD pipeline as a whole.

Moving forward, developers must pay close attention to uniquely identified logs by leveraging machine learning. This will help proactively identify these potential attacks as the requests appear in the kubelet or aggregated API server logs, which would otherwise be indistinguishable from correctly authorized and proxied requests via the Kubernetes API server. If developers, and digital organizations as a whole, are not able to correctly identify bad behaviour via logs, that is a major flaw.”

– George Gerchow, Chief Security Officer, Sumo Logic

How bad is it?

It’s pretty bad.

Red Hat has classified this flaw as critical with a 9.8/10 rating. This is because the attack is not especially complex and it can be executed remotely. There aren’t even any user interactions or special privileges required.

Additionally, because the miscreants make these unauthorized requests over an already established connection, nothing shows up in the Kubernetes API server’s audit logs or server log.

“There is no simple way to detect whether this vulnerability has been used,” wrote Jordan Liggitt, the staff software engineer at Google who initially publicized the flaw on GitHub.

However, although these requests do show up in the kubelet or aggregated API server logs, they are virtually indistinguishable from correctly authorized and proxied requests made via the Kubernetes API server. That means you need to go through your logs with a fine-toothed comb just to see whether you’ve been affected.

SEE ALSO: Good coding practices mean good data security

Who’s affected?

A lot of people, unfortunately. The affected component is the Kubernetes API server itself, which means this hits pretty much everyone. Practically any deployment that uses Kubernetes is affected.

In particular, this affects any cluster that runs extension (aggregated) API servers directly accessible from the Kubernetes API server’s network. Got a metrics server? You’re affected.

Additionally, this also affects any clusters that grant pod exec/attach/portforward permissions to users that are not expected to have full access to kubelet APIs.
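
If you’re not sure whether your cluster falls into the first category, one quick check is to list the registered API services and see which ones are backed by an in-cluster service rather than served locally by kube-apiserver. A minimal sketch (the custom-columns output is just one way to surface the backing service):

    # API services whose spec names a backing service are served by an
    # aggregated/extension API server (e.g. the metrics server); "<none>"
    # means that API group is handled locally by kube-apiserver.
    kubectl get apiservices \
      -o custom-columns=NAME:.metadata.name,SERVICE:.spec.service.name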

Here are the affected versions:

  • Kubernetes v1.0.x-1.9.x
  • Kubernetes v1.10.0-1.10.10 (fixed in v1.10.11)
  • Kubernetes v1.11.0-1.11.4 (fixed in v1.11.5)
  • Kubernetes v1.12.0-1.12.2 (fixed in v1.12.3)

There is a patch of sunshine in all this gloom. This flaw is fixed in the following Kubernetes releases:

  • v1.10.11
  • v1.11.5
  • v1.12.3
  • v1.13.0-rc.1

SEE ALSO: Kubernetes adoption hasn’t exploded yet, new study shows

How can it be fixed? Or, barring that, mitigated?

Thankfully, there is a fix. You need to update your Kubernetes deployment now to one of the patched versions: v1.10.11, v1.11.5, v1.12.3, or v1.13.0-rc.1.
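
What the upgrade actually looks like depends on how your cluster is managed; hosted platforms have their own upgrade paths. As a minimal sketch for a cluster set up with kubeadm (v1.12.3 below is just one of the patched releases; pick the one matching your minor version):

    # Confirm which version the control plane is currently running
    kubectl version --short

    # On a kubeadm-managed cluster, review the available upgrade and apply it
    kubeadm upgrade plan
    kubeadm upgrade apply v1.12.3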

Don’t want to upgrade? There are mitigations, but you’re not going to like them. Jordan Liggitt called them “disruptive”, which is putting it mildly. (A rough sketch of a couple of these steps follows the list.)

  • Suspend the use of aggregated API servers. This will disrupt users of the APIs provided by the aggregated server.
  • Disable anonymous requests by passing --anonymous-auth=false to the kube-apiserver. This will probably disrupt load balancer or kubelet health checks of the kube-apiserver, and break kubeadm join setup flows.
  • Remove all anonymous access to all aggregated APIs. Obviously, this includes discovery permissions granted by the default discovery role bindings.
  • Remove all access to all aggregated APIs from users that should not have full access to the aggregated APIs. This may disrupt users and controllers that make use of discovery information to map API types to URLs.
  • Remove pod exec/attach/portforward permissions from users that should not have full access to the kubelet API.
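
As an illustration of the RBAC side of this, here is a minimal sketch of how you might inspect and pin the default discovery binding so that anonymous users lose access and the API server doesn’t quietly restore the default on restart. This assumes RBAC is enabled and that you’ve reviewed what the change will break for your workloads:

    # See who is currently granted the default discovery role; on affected
    # versions this typically includes the system:unauthenticated group
    kubectl get clusterrolebinding system:discovery -o yaml

    # After removing system:unauthenticated from the subjects, pin the binding
    # so the API server does not re-add the default subjects on restart
    kubectl annotate --overwrite clusterrolebinding system:discovery \
      rbac.authorization.kubernetes.io/autoupdate=false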

However, given how widely this flaw has been publicized, it’s only a matter of time before it’s abused. And unless you’re watching your kubelet or aggregated API server logs very carefully, you won’t even realize you’ve been attacked until it’s too late.

The only way to deal with this is to upgrade your Kubernetes version to a patched version as soon as humanly possible.

Author
Jane Elizabeth
Jane Elizabeth is an assistant editor for JAXenter.com.