How To Develop Kubernetes CLIs Like a Pro

A short one today. Just wanted you to meet my new favorite Go library for working with Kubernetes - k8s.io/cli-runtime. It's probably the most underappreciated package in the whole k8s.io/* family, judging by its value-to-GitHub-stars ratio.

Here is what the README file says about it:

Set of helpers for creating kubectl commands, as well as kubectl plugins.

This library is a shared dependency for clients to work with Kubernetes API infrastructure which allows to maintain kubectl compatible behavior.

If the above description didn't sound too impressive, let me try to decipher it for you - with the cli-runtime library, you can write CLI tools that behave like and are as potent as the mighty kubectl!

Here is what you actually can achieve with just a few lines of code using the cli-runtime library:

  • Register the well-known flags like --kubeconfig|--context|--namespace|--server|--token|... and pass their values to one or more client-go instances.
  • Look up cluster objects by their resources, kinds, and names with the full-blown support of familiar shortcuts like deploy for deployments or po for pods.
  • Read and kustomize YAML/JSON Kubernetes manifests into the corresponding Go structs.
  • Pretty-print Kubernetes objects as YAML, JSON (with JSONPath support), and even human-readable tables!

Create Kubernetes Clients From Command-Line Flags

The de-facto standard solution for the command-line flags processing in Go is cobra. The k8s.io/cli-runtime library embraces it and builds its functionality on top of cobra.

Here is a mini-program that uses typical Kubernetes CLI flags to create a client-go instance for further Kubernetes API access:

package main
import (
    "fmt"
    "github.com/spf13/cobra"
    "k8s.io/cli-runtime/pkg/genericclioptions"
    "k8s.io/client-go/kubernetes"
)
func main() {
    // 1. Create a flags instance.
    configFlags := genericclioptions.NewConfigFlags(true)
    // 2. Create a cobra command.
    cmd := &cobra.Command{
        Use: "kubectl (well, almost)",
        Run: func(cmd *cobra.Command, args []string) {
            // 4. Get client config from the flags.
            config, _ := configFlags.ToRESTConfig()
            // 5. Create a client-go instance for config.
            client, _ := kubernetes.NewForConfig(config)
            vinfo, _ := client.Discovery().ServerVersion()
            fmt.Println(vinfo)
        },
    }
    // 3. Register flags with cobra.
    configFlags.AddFlags(cmd.PersistentFlags())
    _ = cmd.Execute()
}

Now, let's see what flags are available:

$ go run main.go --help
Usage:
  kubectl (well, almost) [flags]
Flags:
      --as string                      Username to impersonate for the operation. User could be a regular user or a service account in a namespace.
      --as-group stringArray           Group to impersonate for the operation, this flag can be repeated to specify multiple groups.
      --as-uid string                  UID to impersonate for the operation.
      --cache-dir string               Default cache directory (default "/home/vagrant/.kube/cache")
      --certificate-authority string   Path to a cert file for the certificate authority
      --client-certificate string      Path to a client certificate file for TLS
      --client-key string              Path to a client key file for TLS
      --cluster string                 The name of the kubeconfig cluster to use
      --context string                 The name of the kubeconfig context to use
  -h, --help                           help for kubectl
      --insecure-skip-tls-verify       If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure
      --kubeconfig string              Path to the kubeconfig file to use for CLI requests.
  -n, --namespace string               If present, the namespace scope for this CLI request
      --request-timeout string         The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default "0")
  -s, --server string                  The address and port of the Kubernetes API server
      --tls-server-name string         Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used
      --token string                   Bearer token for authentication to the API server
      --user string                    The name of the kubeconfig user to use

Quite powerful, isn't it? And only 3 lines of code (well, 4 with the import line) were needed.


Search For Cluster Objects Like a Boss

You probably know that kubectl employs a bunch of tricks to make working with resources a bit more user-friendly:

# Get resources using shortcuts
$ kubectl get po
$ kubectl get pods
$ kubectl get cm
$ kubectl get configmaps
# Get resources by singular and plural names
$ kubectl get service
$ kubectl get services
# Get multiple resource types at once
$ kubectl get po,deploy,svc
# Search by kind instead of resource name
$ kubectl get ServiceAccount
# Get an object by name
$ kubectl get service kubernetes
$ kubectl get service/kubernetes
$ kubectl get svc kubernetes
$ kubectl get svc/kubernetes
# Get resources from all namespaces
$ kubectl get pods --all-namespaces

Turns out, this handiness is actually implemented by the cli-runtime library, and it's fully reusable! You just need to instantiate a resource.Builder and let it parse the command-line argument(s) for you:

package main
import (
    "fmt"

    "github.com/spf13/cobra"
    "k8s.io/cli-runtime/pkg/genericclioptions"
    "k8s.io/cli-runtime/pkg/resource"
    "k8s.io/client-go/kubernetes/scheme"
)
func main() {
    // Already familiar stuff...
    configFlags := genericclioptions.NewConfigFlags(true)
    cmd := &cobra.Command{
        Use: "kubectl (even closer to it this time)",
        Args: cobra.MinimumNArgs(1),
        Run: func(cmd *cobra.Command, args []string) {
            // Our hero - The Resource Builder.
            builder := resource.NewBuilder(configFlags)
            namespace := ""
            if configFlags.Namespace != nil {
                namespace = *configFlags.Namespace
            }
            // Let the Builder do all the heavy-lifting.
            obj, _ := builder.
                // Scheme teaches the Builder how to instantiate resources by names.
                WithScheme(scheme.Scheme, scheme.Scheme.PrioritizedVersionsAllGroups()...).
                // Where to look up.
                NamespaceParam(namespace).
                // What to look for.
                ResourceTypeOrNameArgs(true, args...).
                // Do look up, please.
                Do().
                // Convert the result to a runtime.Object
                Object()
            fmt.Println(obj)
        },
    }
    configFlags.AddFlags(cmd.PersistentFlags())
    _ = cmd.Execute()
}

Now you can use the above mini-program much like the kubectl get command:

$ go run main.go --help
$ go run main.go po
$ go run main.go pod
$ go run main.go pods
$ go run main.go services,deployments
$ go run main.go --namespace=default service/kubernetes
$ go run main.go --namespace default service kubernetes

Interestingly, the actual magic happens not so much in the Builder itself as in the Scheme and RESTMapper components. These are two pillars of another wonderful Kubernetes Go library, k8s.io/apimachinery. Both are essentially registries built around the Kubernetes API Discovery information - the Scheme maps object kinds to the Go structs representing Kubernetes objects, and the RESTMapper maps resource names to kinds and vice versa.

Another interesting element of the above mini-program is the runtime.Object returned by the Builder. It's a generic interface that abstracts over the concrete k8s.io/api structs representing Kubernetes objects (like Pod, Deployment, ConfigMap, etc.).


Read Kubernetes Manifests Into Go Structs

Much like with searching for cluster objects, the resource.Builder can be used to read YAML/JSON Kubernetes manifests from files, URLs, or even stdin. The only difference is the method used to point the Builder at the data source - instead of ResourceTypeOrNameArgs(), you need to call FilenameParam(), supplying a resource.FilenameOptions{Filenames []string, Recursive bool, Kustomize bool} parameter:

package main
import (
    "fmt"

    "github.com/spf13/cobra"
    "k8s.io/cli-runtime/pkg/genericclioptions"
    "k8s.io/cli-runtime/pkg/resource"
    "k8s.io/client-go/kubernetes/scheme"
)
func main() {
    configFlags := genericclioptions.NewConfigFlags(true)
    cmd := &cobra.Command{
        Use:  "kubectl (not really)",
        Args: cobra.MinimumNArgs(1),
        Run: func(cmd *cobra.Command, args []string) {
            builder := resource.NewBuilder(configFlags)
            namespace := ""
            if configFlags.Namespace != nil {
                namespace = *configFlags.Namespace
            }
            enforceNamespace := namespace != ""
            _ = builder.
                WithScheme(scheme.Scheme, scheme.Scheme.PrioritizedVersionsAllGroups()...).
                NamespaceParam(namespace).
                DefaultNamespace().
                FilenameParam(
                    enforceNamespace,
                    &resource.FilenameOptions{Filenames: args},
                ).
                Do().
                Visit(func(info *resource.Info, _ error) error {
                    fmt.Println(info.Object)
                    return nil
                })
        },
    }
    configFlags.AddFlags(cmd.PersistentFlags())
    _ = cmd.Execute()
}

If the namespace parameter is provided and some objects in the manifests omit the namespace, it'll be populated from that parameter (that's what the DefaultNamespace() call enables). The enforceNamespace flag can be used to make the Builder fail if the supplied namespace value differs from a namespace explicitly set in the manifest(s).


Pretty-Print Kubernetes Object as YAML/JSON/Tables

If you tried the above mini-programs, you probably noticed how poor the output formatting was. Luckily, the cli-runtime library ships a bunch of (pretty-)printers that can dump Kubernetes objects as YAML, JSON, or even human-readable tables:

package main
import (
    "fmt"
    "os"
    corev1 "k8s.io/api/core/v1"
    "k8s.io/cli-runtime/pkg/printers"
    "k8s.io/client-go/kubernetes/scheme"
)
func main() {
    obj := &corev1.ConfigMap{
        Data: map[string]string{"foo": "bar"},
    }
    obj.Name = "my-cm"
    // YAML
    fmt.Println("# YAML ConfigMap representation")
    printr := printers.NewTypeSetter(scheme.Scheme).ToPrinter(&printers.YAMLPrinter{})
    if err := printr.PrintObj(obj, os.Stdout); err != nil {
        panic(err.Error())
    }
    fmt.Println()
    // JSON
    fmt.Println("# JSON ConfigMap representation")
    printr = printers.NewTypeSetter(scheme.Scheme).ToPrinter(&printers.JSONPrinter{})
    if err := printr.PrintObj(obj, os.Stdout); err != nil {
        panic(err.Error())
    }
    fmt.Println()
    // Table (human-readable)
    fmt.Println("# Table ConfigMap representation")
    printr = printers.NewTypeSetter(scheme.Scheme).ToPrinter(printers.NewTablePrinter(printers.PrintOptions{}))
    if err := printr.PrintObj(obj, os.Stdout); err != nil {
        panic(err.Error())
    }
    fmt.Println()
    // JSONPath
    fmt.Println("# ConfigMap.data.foo")
    printr, err := printers.NewJSONPathPrinter("{.data.foo}")
    if err != nil {
        panic(err.Error())
    }
    printr = printers.NewTypeSetter(scheme.Scheme).ToPrinter(printr)
    if err := printr.PrintObj(obj, os.Stdout); err != nil {
        panic(err.Error())
    }
    fmt.Println()
    // Name-only
    fmt.Println("# <kind>/<name>")
    printr = printers.NewTypeSetter(scheme.Scheme).ToPrinter(&printers.NamePrinter{})
    if err := printr.PrintObj(obj, os.Stdout); err != nil {
        panic(err.Error())
    }
    fmt.Println()
}

Notice how all the PrintObj() methods accept the already familiar runtime.Object type, making the printing functionality generic.

