How To Develop Kubernetes CLIs Like a Pro

A short one today. Just wanted you to meet my new favorite Go library to work with Kubernetes - k8s.io/cli-runtime. It's probably the most underappreciated package in the whole k8s.io/* family based on its value to the number of stars ratio.

Here is what the README file says about it:

Set of helpers for creating kubectl commands, as well as kubectl plugins.

This library is a shared dependency for clients to work with Kubernetes API infrastructure which allows to maintain kubectl compatible behavior.

If the above description didn't sound too impressive, let me try to decipher it for you - with the cli-runtime library, you can write CLI tools that behave like and are as potent as the mighty kubectl!

Here is what you actually can achieve with just a few lines of code using the cli-runtime library:

  • Register the well-known flags like --kubeconfig|--context|--namespace|--server|--token|... and pass their values to one or more client-go instances.
  • Look up cluster objects by their resources, kinds, and names with the full-blown support of familiar shortcuts like deploy for deployments or po for pods.
  • Read and kustomize YAML/JSON Kubernetes manifests into the corresponding Go structs.
  • Pretty-print Kubernetes objects as YAML, JSON (with JSONPath support), and even human-readable tables!

Create Kubernetes Clients From Command-Line Flags

The de facto standard for command-line flag processing in Go is cobra. The k8s.io/cli-runtime library embraces it and builds its functionality on top of cobra.

Here is a mini-program that uses typical Kubernetes CLI flags to create a client-go instance for further Kubernetes API access:

package main
import (
    "fmt"
    "github.com/spf13/cobra"
    "k8s.io/cli-runtime/pkg/genericclioptions"
    "k8s.io/client-go/kubernetes"
)
func main() {
    // 1. Create a flags instance.
    configFlags := genericclioptions.NewConfigFlags(true)
    // 2. Create a cobra command.
    cmd := &cobra.Command{
        Use: "kubectl (well, almost)",
        Run: func(cmd *cobra.Command, args []string) {
            // 4. Get client config from the flags.
            config, _ := configFlags.ToRESTConfig()
            // 5. Create a client-go instance for config.
            client, _ := kubernetes.NewForConfig(config)
            vinfo, _ := client.Discovery().ServerVersion()
            fmt.Println(vinfo)
        },
    }
    // 3. Register flags with cobra.
    configFlags.AddFlags(cmd.PersistentFlags())
    _ = cmd.Execute()
}

Now, let's see what flags are available:

$ go run main.go --help
Usage:
  kubectl (well, almost) [flags]
Flags:
      --as string                      Username to impersonate for the operation. User could be a regular user or a service account in a namespace.
      --as-group stringArray           Group to impersonate for the operation, this flag can be repeated to specify multiple groups.
      --as-uid string                  UID to impersonate for the operation.
      --cache-dir string               Default cache directory (default "/home/vagrant/.kube/cache")
      --certificate-authority string   Path to a cert file for the certificate authority
      --client-certificate string      Path to a client certificate file for TLS
      --client-key string              Path to a client key file for TLS
      --cluster string                 The name of the kubeconfig cluster to use
      --context string                 The name of the kubeconfig context to use
  -h, --help                           help for kubectl
      --insecure-skip-tls-verify       If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure
      --kubeconfig string              Path to the kubeconfig file to use for CLI requests.
  -n, --namespace string               If present, the namespace scope for this CLI request
      --request-timeout string         The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default "0")
  -s, --server string                  The address and port of the Kubernetes API server
      --tls-server-name string         Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used
      --token string                   Bearer token for authentication to the API server
      --user string                    The name of the kubeconfig user to use

Quite powerful, isn't it? And only 3 lines of code (well, 4 with the import line) were needed.


Search For Cluster Objects Like a Boss

You probably know that kubectl employs a bunch of tricks to make working with resources a bit more user-friendly:

# Get resources using shortcuts
$ kubectl get po
$ kubectl get pods
$ kubectl get cm
$ kubectl get configmaps
# Get resources by singular and plural names
$ kubectl get service
$ kubectl get services
# Get multiple resource types at once
$ kubectl get po,deploy,svc
# Search by kind instead of resource name
$ kubectl get ServiceAccount
# Get an object by name
$ kubectl get service kubernetes
$ kubectl get service/kubernetes
$ kubectl get svc kubernetes
$ kubectl get svc/kubernetes
# Get resources from all namespaces
$ kubectl get pods --all-namespaces

It turns out this handiness is actually implemented by the cli-runtime library, and it's fully reusable! You just need to instantiate a resource.Builder and let it parse the command-line argument(s) for you:

package main
import (
    "fmt"
    "os"
    "github.com/spf13/cobra"
    "k8s.io/cli-runtime/pkg/genericclioptions"
    "k8s.io/cli-runtime/pkg/resource"
    "k8s.io/client-go/kubernetes/scheme"
)
func main() {
    // Already familiar stuff...
    configFlags := genericclioptions.NewConfigFlags(true)
    cmd := &cobra.Command{
        Use: "kubectl (even closer to it this time)",
        Args: cobra.MinimumNArgs(1),
        Run: func(cmd *cobra.Command, args []string) {
            // Our hero - The Resource Builder.
            builder := resource.NewBuilder(configFlags)
            namespace := ""
            if configFlags.Namespace != nil {
                namespace = *configFlags.Namespace
            }
            // Let the Builder do all the heavy-lifting.
            obj, _ := builder.
                // Scheme teaches the Builder how to instantiate resources by names.
                WithScheme(scheme.Scheme, scheme.Scheme.PrioritizedVersionsAllGroups()...).
                // Where to look up.
                NamespaceParam(namespace).
                // What to look for.
                ResourceTypeOrNameArgs(true, args...).
                // Do look up, please.
                Do().
                // Convert the result to a runtime.Object
                Object()
            fmt.Println(obj)
        },
    }
    configFlags.AddFlags(cmd.PersistentFlags())
    _ = cmd.Execute()
}

Now you can use the above mini-program much like the kubectl get command:

$ go run main.go --help
$ go run main.go po
$ go run main.go pod
$ go run main.go pods
$ go run main.go services,deployments
$ go run main.go --namespace=default service/kubernetes
$ go run main.go --namespace default service kubernetes

Interestingly, the actual magic happens not so much in the builder itself as in the Scheme and RESTMapper modules. These are two pillars of another wonderful Kubernetes Go library - k8s.io/apimachinery. Both are essentially registries built around the Kubernetes API discovery information - the Scheme maps object kinds to the Go structs representing Kubernetes objects, and the RESTMapper maps resource names to kinds and vice versa.

Another interesting element in the above mini-program is the runtime.Object returned by the Builder. It's a generic interface abstracting the concrete k8s.io/api structs that represent Kubernetes objects (like Pod, Deployment, ConfigMap, etc.).


Read Kubernetes Manifests Into Go Structs

Much like with searching for cluster objects, the resource.Builder can be used to read YAML/JSON Kubernetes manifests from files, URLs, or even stdin. The only difference is the method used to point the builder at the data source - instead of ResourceTypeOrNameArgs(), you'd need to call FilenameParam(), supplying a resource.FilenameOptions parameter (its fields are Filenames []string, Recursive bool, and Kustomize bool):

package main
import (
    "os"
    "github.com/spf13/cobra"
    "k8s.io/cli-runtime/pkg/genericclioptions"
    "k8s.io/cli-runtime/pkg/resource"
    "k8s.io/client-go/kubernetes/scheme"
)
func main() {
    configFlags := genericclioptions.NewConfigFlags(true)
    cmd := &cobra.Command{
        Use:  "kubectl (not really)",
        Args: cobra.MinimumNArgs(1),
        Run: func(cmd *cobra.Command, args []string) {
            builder := resource.NewBuilder(configFlags)
            namespace := ""
            if configFlags.Namespace != nil {
                namespace = *configFlags.Namespace
            }
            enforceNamespace := namespace != ""
            _ = builder.
                WithScheme(scheme.Scheme, scheme.Scheme.PrioritizedVersionsAllGroups()...).
                NamespaceParam(namespace).
                DefaultNamespace().
                FilenameParam(
                    enforceNamespace,
                    &resource.FilenameOptions{Filenames: args},
                ).
                Do().
                Visit(func(info *resource.Info, _ error) error {
                    fmt.Println(info.Object)
                    return nil
                })
        },
    }
    configFlags.AddFlags(cmd.PersistentFlags())
    _ = cmd.Execute()
}

If the additional namespace parameter is provided and (some) objects in the manifests lack a namespace, it'll be populated from that parameter. The enforceNamespace flag can be used to make the builder fail if the supplied namespace value differs from the (explicitly set) values in the manifest(s).


Pretty-Print Kubernetes Object as YAML/JSON/Tables

If you tried the above mini-programs, you probably noticed how poor the output formatting was. Luckily, the cli-runtime library has a bunch of (pretty-)printers that can be used to dump Kubernetes objects as YAML, JSON, or even human-readable tables:

package main
import (
    "fmt"
    "os"
    corev1 "k8s.io/api/core/v1"
    "k8s.io/cli-runtime/pkg/printers"
    "k8s.io/client-go/kubernetes/scheme"
)
func main() {
    obj := &corev1.ConfigMap{
        Data: map[string]string{"foo": "bar"},
    }
    obj.Name = "my-cm"
    // YAML
    fmt.Println("# YAML ConfigMap representation")
    printr := printers.NewTypeSetter(scheme.Scheme).ToPrinter(&printers.YAMLPrinter{})
    if err := printr.PrintObj(obj, os.Stdout); err != nil {
        panic(err.Error())
    }
    fmt.Println()
    // JSON
    fmt.Println("# JSON ConfigMap representation")
    printr = printers.NewTypeSetter(scheme.Scheme).ToPrinter(&printers.JSONPrinter{})
    if err := printr.PrintObj(obj, os.Stdout); err != nil {
        panic(err.Error())
    }
    fmt.Println()
    // Table (human-readable)
    fmt.Println("# Table ConfigMap representation")
    printr = printers.NewTypeSetter(scheme.Scheme).ToPrinter(printers.NewTablePrinter(printers.PrintOptions{}))
    if err := printr.PrintObj(obj, os.Stdout); err != nil {
        panic(err.Error())
    }
    fmt.Println()
    // JSONPath
    fmt.Println("# ConfigMap.data.foo")
    printr, err := printers.NewJSONPathPrinter("{.data.foo}")
    if err != nil {
        panic(err.Error())
    }
    printr = printers.NewTypeSetter(scheme.Scheme).ToPrinter(printr)
    if err := printr.PrintObj(obj, os.Stdout); err != nil {
        panic(err.Error())
    }
    fmt.Println()
    // Name-only
    fmt.Println("# <kind>/<name>")
    printr = printers.NewTypeSetter(scheme.Scheme).ToPrinter(&printers.NamePrinter{})
    if err := printr.PrintObj(obj, os.Stdout); err != nil {
        panic(err.Error())
    }
    fmt.Println()
}

Notice how all the PrintObj() methods accept the already familiar runtime.Object type, making the printing functionality generic.

