How To Develop Kubernetes CLIs Like a Pro

A short one today. Just wanted you to meet my new favorite Go library to work with Kubernetes - k8s.io/cli-runtime. It's probably the most underappreciated package in the whole k8s.io/* family based on its value to the number of stars ratio.

Here is what the README file says about it:

Set of helpers for creating kubectl commands, as well as kubectl plugins.

This library is a shared dependency for clients to work with Kubernetes API infrastructure which allows to maintain kubectl compatible behavior.

If the above description didn't sound too impressive, let me try to decipher it for you - with the cli-runtime library, you can write CLI tools that behave like and are as potent as the mighty kubectl!

Here is what you actually can achieve with just a few lines of code using the cli-runtime library:

  • Register the well-known flags like --kubeconfig|--context|--namespace|--server|--token|... and pass their values to one or more client-go instances.
  • Look up cluster objects by their resources, kinds, and names with the full-blown support of familiar shortcuts like deploy for deployments or po for pods.
  • Read and kustomize YAML/JSON Kubernetes manifests into the corresponding Go structs.
  • Pretty-print Kubernetes objects as YAML, JSON (with JSONPath support), and even human-readable tables!

Create Kubernetes Clients From Command-Line Flags

The de facto standard solution for command-line flag processing in Go is cobra. The k8s.io/cli-runtime library embraces cobra and builds its functionality on top of it.

Here is a mini-program that uses typical Kubernetes CLI flags to create a client-go instance for further Kubernetes API access:

package main
import (
    "fmt"
    "github.com/spf13/cobra"
    "k8s.io/cli-runtime/pkg/genericclioptions"
    "k8s.io/client-go/kubernetes"
)
func main() {
    // 1. Create a flags instance.
    configFlags := genericclioptions.NewConfigFlags(true)
    // 2. Create a cobra command.
    cmd := &cobra.Command{
        Use: "kubectl (well, almost)",
        Run: func(cmd *cobra.Command, args []string) {
            // 4. Get client config from the flags.
            config, _ := configFlags.ToRESTConfig()
            // 5. Create a client-go instance for config.
            client, _ := kubernetes.NewForConfig(config)
            vinfo, _ := client.Discovery().ServerVersion()
            fmt.Println(vinfo)
        },
    }
    // 3. Register flags with cobra.
    configFlags.AddFlags(cmd.PersistentFlags())
    _ = cmd.Execute()
}

Now, let's see what flags are available:

$ go run main.go --help
Usage:
  kubectl (well, almost) [flags]
Flags:
      --as string                      Username to impersonate for the operation. User could be a regular user or a service account in a namespace.
      --as-group stringArray           Group to impersonate for the operation, this flag can be repeated to specify multiple groups.
      --as-uid string                  UID to impersonate for the operation.
      --cache-dir string               Default cache directory (default "/home/vagrant/.kube/cache")
      --certificate-authority string   Path to a cert file for the certificate authority
      --client-certificate string      Path to a client certificate file for TLS
      --client-key string              Path to a client key file for TLS
      --cluster string                 The name of the kubeconfig cluster to use
      --context string                 The name of the kubeconfig context to use
  -h, --help                           help for kubectl
      --insecure-skip-tls-verify       If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure
      --kubeconfig string              Path to the kubeconfig file to use for CLI requests.
  -n, --namespace string               If present, the namespace scope for this CLI request
      --request-timeout string         The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default "0")
  -s, --server string                  The address and port of the Kubernetes API server
      --tls-server-name string         Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used
      --token string                   Bearer token for authentication to the API server
      --user string                    The name of the kubeconfig user to use

Quite powerful, isn't it? And only 3 lines of code (well, 4 with the import line) were needed.
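
The same configFlags instance can feed any number of client-go clients, and it also knows the effective namespace. Below is a minimal sketch of a "list pods" variant built the same way - the command name and output format are mine, and error handling is kept deliberately terse:

package main
import (
    "context"
    "fmt"
    "github.com/spf13/cobra"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/cli-runtime/pkg/genericclioptions"
    "k8s.io/client-go/kubernetes"
)
func main() {
    configFlags := genericclioptions.NewConfigFlags(true)
    cmd := &cobra.Command{
        Use: "list-pods (a sketch)",
        RunE: func(cmd *cobra.Command, args []string) error {
            // The flags produce the REST config...
            config, err := configFlags.ToRESTConfig()
            if err != nil {
                return err
            }
            // ...and the effective namespace (--namespace flag,
            // the current kubeconfig context, or "default").
            namespace, _, err := configFlags.ToRawKubeConfigLoader().Namespace()
            if err != nil {
                return err
            }
            client, err := kubernetes.NewForConfig(config)
            if err != nil {
                return err
            }
            pods, err := client.CoreV1().Pods(namespace).List(context.TODO(), metav1.ListOptions{})
            if err != nil {
                return err
            }
            for _, pod := range pods.Items {
                fmt.Println(pod.Namespace + "/" + pod.Name)
            }
            return nil
        },
    }
    configFlags.AddFlags(cmd.PersistentFlags())
    _ = cmd.Execute()
}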


Search For Cluster Objects Like a Boss

You probably know that kubectl employs a bunch of tricks to make working with resources a bit more user-friendly:

# Get resources using shortcuts
$ kubectl get po
$ kubectl get pods
$ kubectl get cm
$ kubectl get configmaps
# Get resources by singular and plural names
$ kubectl get service
$ kubectl get services
# Get multiple resource types at once
$ kubectl get po,deploy,svc
# Search by kind instead of resource name
$ kubectl get ServiceAccount
# Get an object by name
$ kubectl get service kubernetes
$ kubectl get service/kubernetes
$ kubectl get svc kubernetes
$ kubectl get svc/kubernetes
# Get resources from all namespaces
$ kubectl get pods --all-namespaces

Turns out, this handiness is actually implemented by the cli-runtime library, and it's fully reusable! You just need to instantiate a resource.Builder and let it parse the command-line argument(s) for you:

package main
import (
    "fmt"
    "os"
    "github.com/spf13/cobra"
    "k8s.io/cli-runtime/pkg/genericclioptions"
    "k8s.io/cli-runtime/pkg/resource"
    "k8s.io/client-go/kubernetes/scheme"
)
func main() {
    // Already familiar stuff...
    configFlags := genericclioptions.NewConfigFlags(true)
    cmd := &cobra.Command{
        Use: "kubectl (even closer to it this time)",
        Args: cobra.MinimumNArgs(1),
        Run: func(cmd *cobra.Command, args []string) {
            // Our hero - The Resource Builder.
            builder := resource.NewBuilder(configFlags)
            namespace := ""
            if configFlags.Namespace != nil {
                namespace = *configFlags.Namespace
            }
            // Let the Builder do all the heavy-lifting.
            obj, _ := builder.
                // Scheme teaches the Builder how to instantiate resources by names.
                WithScheme(scheme.Scheme, scheme.Scheme.PrioritizedVersionsAllGroups()...).
                // Where to look up.
                NamespaceParam(namespace).
                // What to look for.
                ResourceTypeOrNameArgs(true, args...).
                // Do look up, please.
                Do().
                // Convert the result to a runtime.Object
                Object()
            fmt.Println(obj)
        },
    }
    configFlags.AddFlags(cmd.PersistentFlags())
    _ = cmd.Execute()
}

Now you can use the above mini-program much like the kubectl get command:

$ go run main.go --help
$ go run main.go po
$ go run main.go pod
$ go run main.go pods
$ go run main.go services,deployments
$ go run main.go --namespace=default service/kubernetes
$ go run main.go --namespace default service kubernetes

Interestingly, the actual magic happens not so much in the builder itself but rather in the Scheme and RESTMapper modules. These are two pillars of another wonderful Kubernetes Go library called k8s.io/apimachinery. Both are kinda sorta registries utilizing the Kubernetes API discovery information - the Scheme maps object kinds to the Go structs representing Kubernetes objects, and the RESTMapper maps resource names to kinds and vice versa.
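
To make this a bit more tangible, here is a hedged standalone sketch (it needs a reachable cluster because the RESTMapper is built from API discovery data; the values in the comments are what I'd expect, not verbatim output). It asks the RESTMapper what the shortcut po refers to and then lets the Scheme instantiate the matching Go struct:

package main
import (
    "fmt"
    "k8s.io/apimachinery/pkg/runtime/schema"
    "k8s.io/cli-runtime/pkg/genericclioptions"
    "k8s.io/client-go/kubernetes/scheme"
)
func main() {
    configFlags := genericclioptions.NewConfigFlags(true)
    // The RESTMapper is built from the cluster's API discovery info
    // (and it understands shortcuts like "po" or "deploy").
    mapper, err := configFlags.ToRESTMapper()
    if err != nil {
        panic(err.Error())
    }
    // Resource name (or shortcut) -> fully-qualified group/version/resource.
    gvr, err := mapper.ResourceFor(schema.GroupVersionResource{Resource: "po"})
    if err != nil {
        panic(err.Error())
    }
    fmt.Println("resource:", gvr) // e.g. /v1, Resource=pods
    // Resource -> kind.
    gvk, err := mapper.KindFor(gvr)
    if err != nil {
        panic(err.Error())
    }
    fmt.Println("kind:", gvk) // e.g. /v1, Kind=Pod
    // Kind -> Go struct - this part is the Scheme's job.
    obj, err := scheme.Scheme.New(gvk)
    if err != nil {
        panic(err.Error())
    }
    fmt.Printf("struct: %T\n", obj) // e.g. *v1.Pod
}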

Another interesting specimen in the above mini-program is the runtime.Object returned by the Builder. It's a generic interface abstracting away the concrete k8s.io/api struct representing a particular Kubernetes object (like Pod, Deployment, ConfigMap, etc.).
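
Whenever the concrete type is needed, a plain Go type switch brings it back. A minimal self-contained sketch (the describe() helper is mine; in a real CLI its argument would be the object returned by the Builder):

package main
import (
    "fmt"
    appsv1 "k8s.io/api/apps/v1"
    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/runtime"
)
// describe recovers the concrete type hidden behind the generic runtime.Object interface.
func describe(obj runtime.Object) string {
    switch o := obj.(type) {
    case *corev1.Pod:
        return "a Pod named " + o.Name
    case *appsv1.Deployment:
        return "a Deployment named " + o.Name
    case *corev1.ConfigMap:
        return fmt.Sprintf("a ConfigMap with %d key(s)", len(o.Data))
    default:
        return fmt.Sprintf("something else entirely: %T", obj)
    }
}
func main() {
    pod := &corev1.Pod{}
    pod.Name = "my-pod"
    fmt.Println(describe(pod)) // a Pod named my-pod
}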


Read Kubernetes Manifests Into Go Structs

Much like with searching for cluster objects, the resource.Builder can be used to read YAML/JSON Kubernetes manifests from files, URLs, or even stdin. The only difference is the method used to point the builder at the data source - instead of ResourceTypeOrNameArgs(), you'd call FilenameParam(), supplying a resource.FilenameOptions{Filenames []string, Recursive bool, Kustomize bool} parameter:

package main
import (
    "os"
    "github.com/spf13/cobra"
    "k8s.io/cli-runtime/pkg/genericclioptions"
    "k8s.io/cli-runtime/pkg/resource"
    "k8s.io/client-go/kubernetes/scheme"
)
func main() {
    configFlags := genericclioptions.NewConfigFlags(true)
    cmd := &cobra.Command{
        Use:  "kubectl (not really)",
        Args: cobra.MinimumNArgs(1),
        Run: func(cmd *cobra.Command, args []string) {
            builder := resource.NewBuilder(configFlags)
            namespace := ""
            if configFlags.Namespace != nil {
                namespace = *configFlags.Namespace
            }
            enforceNamespace := namespace != ""
            _ = builder.
                WithScheme(scheme.Scheme, scheme.Scheme.PrioritizedVersionsAllGroups()...).
                NamespaceParam(namespace).
                DefaultNamespace().
                FilenameParam(
                    enforceNamespace,
                    &resource.FilenameOptions{Filenames: args},
                ).
                Do().
                Visit(func(info *resource.Info, _ error) error {
                    fmt.Println(info.Object)
                    return nil
                })
        },
    }
    configFlags.AddFlags(cmd.PersistentFlags())
    _ = cmd.Execute()
}

If the namespace parameter is provided and (some) objects in the manifests lack a namespace, it'll be populated from that parameter. The enforceNamespace flag makes the builder fail if the supplied namespace value differs from a namespace explicitly set in the manifest(s).
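
Assuming there is a local manifest to feed it (the file and directory names below are just examples), the mini-program can be pointed at files, directories, or stdin via the conventional - argument:

$ go run main.go ./deployment.yaml
$ go run main.go ./manifests/
$ cat deployment.yaml | go run main.go -
$ go run main.go --namespace=default ./deployment.yaml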


Pretty-Print Kubernetes Object as YAML/JSON/Tables

If you tried the above mini-programs, you probably noticed how poor the output formatting was. Luckily, the cli-runtime library has a bunch of (pretty-)printers that can be used to dump Kubernetes objects as YAML, JSON, or even human-readable tables:

package main
import (
    "fmt"
    "os"
    corev1 "k8s.io/api/core/v1"
    "k8s.io/cli-runtime/pkg/printers"
    "k8s.io/client-go/kubernetes/scheme"
)
func main() {
    obj := &corev1.ConfigMap{
        Data: map[string]string{"foo": "bar"},
    }
    obj.Name = "my-cm"
    // YAML
    fmt.Println("# YAML ConfigMap representation")
    printr := printers.NewTypeSetter(scheme.Scheme).ToPrinter(&printers.YAMLPrinter{})
    if err := printr.PrintObj(obj, os.Stdout); err != nil {
        panic(err.Error())
    }
    fmt.Println()
    // JSON
    fmt.Println("# JSON ConfigMap representation")
    printr = printers.NewTypeSetter(scheme.Scheme).ToPrinter(&printers.JSONPrinter{})
    if err := printr.PrintObj(obj, os.Stdout); err != nil {
        panic(err.Error())
    }
    fmt.Println()
    // Table (human-readable)
    fmt.Println("# Table ConfigMap representation")
    printr = printers.NewTypeSetter(scheme.Scheme).ToPrinter(printers.NewTablePrinter(printers.PrintOptions{}))
    if err := printr.PrintObj(obj, os.Stdout); err != nil {
        panic(err.Error())
    }
    fmt.Println()
    // JSONPath
    fmt.Println("# ConfigMap.data.foo")
    printr, err := printers.NewJSONPathPrinter("{.data.foo}")
    if err != nil {
        panic(err.Error())
    }
    printr = printers.NewTypeSetter(scheme.Scheme).ToPrinter(printr)
    if err := printr.PrintObj(obj, os.Stdout); err != nil {
        panic(err.Error())
    }
    fmt.Println()
    // Name-only
    fmt.Println("# <kind>/<name>")
    printr = printers.NewTypeSetter(scheme.Scheme).ToPrinter(&printers.NamePrinter{})
    if err := printr.PrintObj(obj, os.Stdout); err != nil {
        panic(err.Error())
    }
    fmt.Println()
}

Notice how all the PrintObj() methods accept the already familiar runtime.Object type, making the printing functionality generic.
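
Naturally, the printers compose with the resource.Builder from the earlier examples - whatever the lookup returns can be handed straight to a printer. A hedged sketch that glues the two together into a tiny kubectl get-style command (names are mine, behavior follows the snippets above):

package main
import (
    "os"
    "github.com/spf13/cobra"
    "k8s.io/cli-runtime/pkg/genericclioptions"
    "k8s.io/cli-runtime/pkg/printers"
    "k8s.io/cli-runtime/pkg/resource"
    "k8s.io/client-go/kubernetes/scheme"
)
func main() {
    configFlags := genericclioptions.NewConfigFlags(true)
    cmd := &cobra.Command{
        Use:  "mini-get <type[/name]> ...",
        Args: cobra.MinimumNArgs(1),
        RunE: func(cmd *cobra.Command, args []string) error {
            namespace := ""
            if configFlags.Namespace != nil {
                namespace = *configFlags.Namespace
            }
            // Look the object(s) up, exactly as in the earlier example...
            obj, err := resource.NewBuilder(configFlags).
                WithScheme(scheme.Scheme, scheme.Scheme.PrioritizedVersionsAllGroups()...).
                NamespaceParam(namespace).
                ResourceTypeOrNameArgs(true, args...).
                Do().
                Object()
            if err != nil {
                return err
            }
            // ...and hand the resulting runtime.Object over to a printer.
            printr := printers.NewTypeSetter(scheme.Scheme).ToPrinter(&printers.YAMLPrinter{})
            return printr.PrintObj(obj, os.Stdout)
        },
    }
    configFlags.AddFlags(cmd.PersistentFlags())
    _ = cmd.Execute()
}

Running it as go run main.go svc/kubernetes should produce a kubectl-like YAML dump of the object.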

