
Solving access problems after GKE cluster upgrade to v1.3

In this post I’m just going to briefly describe a work-around for a problem I encountered when upgrading our Kubernetes cluster on Google Container Engine (GKE).

The Problem

The problem occurred after upgrading from Kubernetes version 1.2.5 to 1.3.5 on Google Container Engine. After this upgrade, I could only perform read operations on the cluster with my user account. For example, I could list all the pods just fine by doing:
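    kubectl get pods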

but as soon as I wanted to do something else, like deleting a pod or a replication controller, the following error was shown:
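(the output below is representative; the exact wording depends on the kubectl and API server version)

    $ kubectl delete pod <pod-name>
    Error from server: the server does not allow access to the requested resource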

Usually one simply calls
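    gcloud container clusters get-credentials <cluster-name> --zone <cluster-zone>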

to get the credentials, but this didn’t make any difference. After quite a bit of research I turned to the GKE Slack channel (#google-containers), and luckily Jeff Hodges (@jmhodges) pointed me in the right direction. It turns out that, starting in Kubernetes v1.3, GKE users can authenticate to the Kubernetes API on their cluster using Google OAuth2 access tokens. But something is/was broken on the GKE side when upgrading the cluster, which meant that I could no longer authenticate correctly.
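You can see the difference in ~/.kube/config: with token-based authentication, get-credentials writes a gcp auth-provider entry for the cluster user, while the legacy method stores the client certificate and basic-auth credentials directly. Roughly (the entries below are illustrative, not copied from a real cluster):

    users:
    - name: gke_<project>_<zone>_<cluster>
      user:
        auth-provider:
          name: gcp

versus the legacy entry:

    users:
    - name: gke_<project>_<zone>_<cluster>
      user:
        client-certificate-data: <base64-encoded cert>
        client-key-data: <base64-encoded key>
        username: admin
        password: <cluster password>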

The Solution

The documentation indicates that you can revert to using the legacy cluster certificate or the username/password that you used in the previous version to authenticate. This turned out to be the work-around I was looking for. What one should do is run these two commands:
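    gcloud config set container/use_client_certificate True
    export CLOUDSDK_CONTAINER_USE_CLIENT_CERTIFICATE=True

(The gcloud property and the environment variable control the same client-certificate setting, so get-credentials will generate the old-style credentials from now on.)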

Afterwards, make sure to get the credentials again:
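    gcloud container clusters get-credentials <cluster-name> --zone <cluster-zone>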

Now you should be able to delete pods again! To make this setting permanent, add “export CLOUDSDK_CONTAINER_USE_CLIENT_CERTIFICATE=True” to your .bashrc or .bash_profile.

2 Comments

  1. Andrei Andrei October 9, 2016

    I couldn’t read/write to the server via kubectl, and resetting that prop made it work. Thank you!
