GKE configuration discovery for connecting from outside of the cluster

Christoph Grotz
Google Cloud - Community
2 min read · Sep 30, 2022


Sometimes you want to change or inspect resources on your GKE cluster from outside the cluster. For example, you might have a simple Cloud Function or Cloud Run service that schedules a Pod for you. The challenge lies in setting up the configuration properly to connect to the Kubernetes master API.

The high-level setup where this code could be useful

One approach could be to bake a K8S config file into the container or your function code, but that gives you an environment-specific container or function. That’s not really what we want, since it limits the portability of our deployable.

A better approach, in my opinion, is to use the GKE API to discover the information we need from the cluster. The first step is loading the GKE cluster information; we can use the Google Cloud client library for Go (cloud.google.com/go/container/apiv1) for that:

func getClusterFromGkeApi(clusterName string) (*containerpb.Cluster, error) {
	ctx := context.Background()
	c, err := container.NewClusterManagerClient(ctx)
	if err != nil {
		return nil, err
	}
	// Make sure the client is closed even if GetCluster fails.
	defer c.Close()
	req := &containerpb.GetClusterRequest{
		Name: clusterName,
	}
	cluster, err := c.GetCluster(ctx, req)
	if err != nil {
		return nil, err
	}
	return cluster, nil
}

The clusterName is the fully qualified name of the cluster and takes the shape projects/<project_id>/locations/<zone or region>/clusters/<cluster_name>.
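For illustration, such a name could be assembled like this (projectID, location and clusterID are hypothetical variables from your own configuration):

clusterName := fmt.Sprintf("projects/%s/locations/%s/clusters/%s", projectID, location, clusterID)

Next we use the cluster to create the K8S configuration: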

func autoDiscoverGkeConfig() (*clientcmdapi.Config, error) {
	cluster, err := getClusterFromGkeApi(*gkeClusterName)
	if err != nil {
		log.Printf("Failed retrieving cluster via API %v", err)
		return nil, fmt.Errorf("unable to retrieve cluster")
	}
	config, err := createKubeConfig(cluster.Endpoint, cluster.MasterAuth.ClusterCaCertificate)
	if err != nil {
		log.Printf("Failed creating cluster config %v", err)
		return nil, fmt.Errorf("unable to create config")
	}
	return config, nil
}

You can find the createKubeConfig function in the complete example below; it just generates the config structure for us.
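As a rough sketch (not necessarily the exact code from the full example), it could look something like this; it assumes the endpoint and the base64-encoded CA certificate from the GKE API, and the "gcp" auth provider plugin from client-go for credentials:

// createKubeConfig builds a kubeconfig structure from the cluster endpoint and
// the base64-encoded CA certificate returned by the GKE API. Sketch only: it
// assumes the "gcp" auth provider (blank-import
// k8s.io/client-go/plugin/pkg/client/auth/gcp) supplies the access token.
func createKubeConfig(endpoint string, clusterCaCertificate string) (*clientcmdapi.Config, error) {
	caCert, err := base64.StdEncoding.DecodeString(clusterCaCertificate)
	if err != nil {
		return nil, fmt.Errorf("unable to decode cluster CA certificate: %w", err)
	}
	config := clientcmdapi.NewConfig()
	config.Clusters["gke"] = &clientcmdapi.Cluster{
		Server:                   "https://" + endpoint,
		CertificateAuthorityData: caCert,
	}
	config.AuthInfos["gke"] = &clientcmdapi.AuthInfo{
		AuthProvider: &clientcmdapi.AuthProviderConfig{Name: "gcp"},
	}
	config.Contexts["gke"] = &clientcmdapi.Context{
		Cluster:  "gke",
		AuthInfo: "gke",
	}
	config.CurrentContext = "gke"
	return config, nil
}

We can now use autoDiscoverGkeConfig when creating a K8S client: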

config, err = clientcmd.BuildConfigFromKubeconfigGetter("", autoDiscoverGkeConfig)
if err != nil {
	log.Fatalf("failed creating k8s client via autodiscovery %v", err.Error())
}
clientset, err := kubernetes.NewForConfig(config)
if err != nil {
	log.Fatalf("failed creating the clientset %v", err.Error())
}
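From here the clientset behaves like any other; for example, listing Pods (the namespace and the listing call are just for illustration, assuming a client-go version whose List takes a context):

pods, err := clientset.CoreV1().Pods("default").List(context.Background(), metav1.ListOptions{})
if err != nil {
	log.Fatalf("failed listing pods %v", err.Error())
}
log.Printf("found %d pods", len(pods.Items))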

Here is the full example; I added some convenience functionality around it to make it a better, standalone example. Without any configuration provided, it will try the "normal" kubeconfig file in your user home. But when a GKE cluster name is provided via the gke_cluster_name flag, it will use the discovery mechanism described above, which makes the configuration mechanism portable across your whole environment.
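The flag-based fallback could be wired up roughly like this (a sketch; the gke_cluster_name flag matches the article, while buildConfig and the exact fallback path are assumptions):

var gkeClusterName = flag.String("gke_cluster_name", "",
	"fully qualified GKE cluster name: projects/<project_id>/locations/<location>/clusters/<cluster_name>")

func buildConfig() (*rest.Config, error) {
	if *gkeClusterName != "" {
		// Discover the configuration from the GKE API.
		return clientcmd.BuildConfigFromKubeconfigGetter("", autoDiscoverGkeConfig)
	}
	// Fall back to the regular kubeconfig file in the user's home directory.
	home, err := os.UserHomeDir()
	if err != nil {
		return nil, err
	}
	return clientcmd.BuildConfigFromFlags("", filepath.Join(home, ".kube", "config"))
}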
