Kubernetes Deep Dive, Part 2: Building and Deploying Images (api-server)

Summary: Besides executables such as kubectl, the Kubernetes source tree also produces Docker containers such as api-server and controller-manager. In today's hands-on session we modify the source of these container images, deploy the new images, and verify that our changes take effect.

Welcome to my GitHub

All of Xinchen's original articles (with companion source code) are categorized and indexed here: https://github.com/zq2599/blog_demos

Overview

  • This is the second article in the Kubernetes Deep Dive series. In the previous article we downloaded the Kubernetes 1.13 source code, then modified the kubectl source and rebuilt it to verify the change. Besides executables such as kubectl, the source tree also produces Docker containers such as api-server and controller-manager; today we modify the source of these container images, deploy the new images, and verify that the changes take effect.

Series links

  1. Kubernetes Source Code Study, Part 1: Downloading and Compiling the Source
  2. Kubernetes Deep Dive, Part 2: Building and Deploying Images (api-server)

Environment

  • To verify that the modifications work in a Kubernetes environment, you need a Kubernetes 1.13 cluster. The software versions used in this walkthrough are:
  1. OS: CentOS 7.6.1810
  2. Go: 1.12
  3. Docker: 17.03.2-ce
  4. Kubernetes: 1.13

Downloading the dependency images

  • The build uses the following three images, which docker pull cannot fetch directly in some network environments:
  1. k8s.gcr.io/kube-cross:v1.11.5-1
  2. k8s.gcr.io/debian-iptables-amd64:v11.0
  3. k8s.gcr.io/debian-base-amd64:0.4.0
  • If your environment cannot download these three images, you can obtain them as follows:
  • Run the following command to download the three copies I uploaded:
docker pull bolingcavalry/kube-cross:v1.11.5-1 \
&& docker pull bolingcavalry/debian-iptables-amd64:v11.0 \
&& docker pull bolingcavalry/debian-base-amd64:0.4.0
  • After the download completes, docker images shows the three images:
[root@hedy kubernetes]# docker images
REPOSITORY                            TAG                 IMAGE ID            CREATED             SIZE
bolingcavalry/kube-cross              v1.11.5-1           b16987a9b305        7 weeks ago         1.75 GB
bolingcavalry/debian-iptables-amd64   v11.0               48319fdf4d25        4 months ago        45.4 MB
bolingcavalry/debian-base-amd64       0.4.0               8021d54711e6        4 months ago        42.3 MB
  • Run the following commands to retag the downloaded images and delete the originals, which are no longer needed:
docker tag b16987a9b305 k8s.gcr.io/kube-cross:v1.11.5-1 \
&& docker tag 48319fdf4d25 k8s.gcr.io/debian-iptables-amd64:v11.0 \
&& docker tag 8021d54711e6 k8s.gcr.io/debian-base-amd64:0.4.0 \
&& docker rmi bolingcavalry/kube-cross:v1.11.5-1 \
&& docker rmi bolingcavalry/debian-iptables-amd64:v11.0 \
&& docker rmi bolingcavalry/debian-base-amd64:0.4.0
  • Run docker images again; the local repository now holds exactly the three images the build needs:
[root@hedy kubernetes]# docker images
REPOSITORY                         TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-cross              v1.11.5-1           b16987a9b305        7 weeks ago         1.75 GB
k8s.gcr.io/debian-iptables-amd64   v11.0               48319fdf4d25        4 months ago        45.4 MB
k8s.gcr.io/debian-base-amd64       0.4.0               8021d54711e6        4 months ago        42.3 MB
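The pull / retag / cleanup steps above can also be generated by a small loop, so the image list lives in one place. This sketch only prints the commands; pipe its output to sh to actually run them (Docker required):

```shell
# Print the pull/retag/cleanup commands for the three build images.
# To execute them, pipe this loop's output to sh.
for img in kube-cross:v1.11.5-1 debian-iptables-amd64:v11.0 debian-base-amd64:0.4.0; do
  echo "docker pull bolingcavalry/$img"
  echo "docker tag bolingcavalry/$img k8s.gcr.io/$img"
  echo "docker rmi bolingcavalry/$img"
done
```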
  • Open the file build/lib/release.sh, find the line below, and delete --pull from it so the build will not try to re-pull the base images from the remote registry:
"${DOCKER[@]}" build --pull -q -t "${docker_image_tag}" "${docker_build_path}" >/dev/null
  • The exact location is shown in the green box in the figure below; delete the highlighted flag:

[Figure: the --pull flag in the docker build line of build/lib/release.sh]
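The same edit can be scripted. The snippet below reproduces the docker build line in a scratch file and strips the flag with sed; running the identical sed expression against build/lib/release.sh (after backing it up) performs the real change:

```shell
# Reproduce the docker build line from build/lib/release.sh in a scratch file
cat > /tmp/release_line.sh <<'EOF'
"${DOCKER[@]}" build --pull -q -t "${docker_image_tag}" "${docker_build_path}" >/dev/null
EOF
# Strip the --pull flag: exactly the manual edit described above
sed -i 's/ --pull//' /tmp/release_line.sh
cat /tmp/release_line.sh
```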

  • That completes the preparation; next comes the modification itself.

Modifying the source

  • The next step is to modify the source. In this session we change the api-server source by adding some log statements; in the verification step, seeing those logs proves that our modified code is running.
  • The file to modify is create.go, the entry point for handling resource-creation requests, at the following path:
$GOPATH/src/k8s.io/kubernetes/staging/src/k8s.io/apiserver/pkg/endpoints/handlers/create.go
  • Add log statements where create.go handles the request, as shown below; all the fmt.Println calls are newly added:
// Note: fmt and runtime/debug must be in this file's import list for the added lines to compile.
func createHandler(r rest.NamedCreater, scope RequestScope, admit admission.Interface, includeName bool) http.HandlerFunc {
    return func(w http.ResponseWriter, req *http.Request) {
        fmt.Println("***********************************************************************************************")
        fmt.Println("start create", req)
        fmt.Println("-----------------------------------------------------------------------------------------------")
        fmt.Printf("%s\n", debug.Stack())
        fmt.Println("***********************************************************************************************")
  • These lines make the api-server print a log entry whenever it receives a resource-creation request; the entry contains the HTTP request and the call stack of the current method.

Building

  • Go to the directory $GOPATH/src/k8s.io/kubernetes and run the following command to build the images:
KUBE_BUILD_PLATFORMS=linux/amd64 KUBE_BUILD_CONFORMANCE=n KUBE_BUILD_HYPERKUBE=n make release-images
  • According to build/root/Makefile, KUBE_BUILD_CONFORMANCE controls whether the conformance-test image is built, and KUBE_BUILD_HYPERKUBE controls whether the hyperkube image (all the tools bundled together) is built. Neither is needed here, so both are set to "n" to skip them;
  • After ten-odd minutes the images are built and the console shows:
[root@hedy kubernetes]# KUBE_BUILD_PLATFORMS=linux/amd64 KUBE_BUILD_CONFORMANCE=n KUBE_BUILD_HYPERKUBE=n make release-images
+++ [0316 19:11:40] Verifying Prerequisites....
+++ [0316 19:11:40] Building Docker image kube-build:build-b58720d1c7-5-v1.11.5-1
+++ [0316 19:15:46] Creating data container kube-build-data-b58720d1c7-5-v1.11.5-1
+++ [0316 19:17:02] Syncing sources to container
+++ [0316 19:17:11] Running build command...
+++ [0316 19:17:21] Building go targets for linux/amd64:
    ./vendor/k8s.io/code-generator/cmd/deepcopy-gen
+++ [0316 19:17:28] Building go targets for linux/amd64:
    ./vendor/k8s.io/code-generator/cmd/defaulter-gen
+++ [0316 19:17:34] Building go targets for linux/amd64:
    ./vendor/k8s.io/code-generator/cmd/conversion-gen
+++ [0316 19:17:43] Building go targets for linux/amd64:
    ./vendor/k8s.io/kube-openapi/cmd/openapi-gen
2019/03/16 19:17:51 Code for OpenAPI definitions generated
+++ [0316 19:17:52] Building go targets for linux/amd64:
    ./vendor/github.com/jteeuwen/go-bindata/go-bindata
+++ [0316 19:17:53] Building go targets for linux/amd64:
    cmd/cloud-controller-manager
    cmd/kube-apiserver
    cmd/kube-controller-manager
    cmd/kube-scheduler
    cmd/kube-proxy
+++ [0316 19:20:41] Syncing out of container
+++ [0316 19:20:55] Building images: linux-amd64
+++ [0316 19:20:56] Starting docker build for image: cloud-controller-manager-amd64
+++ [0316 19:20:56] Starting docker build for image: kube-apiserver-amd64
+++ [0316 19:20:56] Starting docker build for image: kube-controller-manager-amd64
+++ [0316 19:20:56] Starting docker build for image: kube-scheduler-amd64
+++ [0316 19:20:56] Starting docker build for image: kube-proxy-amd64
+++ [0316 19:21:37] Deleting docker image k8s.gcr.io/kube-proxy:v1.13.5-beta.0.7_6c1e64b94a3e11-dirty
+++ [0316 19:21:41] Deleting docker image k8s.gcr.io/kube-scheduler:v1.13.5-beta.0.7_6c1e64b94a3e11-dirty
+++ [0316 19:21:42] Deleting docker image k8s.gcr.io/cloud-controller-manager:v1.13.5-beta.0.7_6c1e64b94a3e11-dirty
+++ [0316 19:21:42] Deleting docker image k8s.gcr.io/kube-controller-manager:v1.13.5-beta.0.7_6c1e64b94a3e11-dirty
+++ [0316 19:21:44] Deleting docker image k8s.gcr.io/kube-apiserver:v1.13.5-beta.0.7_6c1e64b94a3e11-dirty
+++ [0316 19:21:48] Docker builds done
  • The build produces tar files in the directory below; they can be loaded into a local image repository with docker load:
[root@hedy amd64]# cd $GOPATH/src/k8s.io/kubernetes/_output/release-images/amd64
[root@hedy amd64]# ls
cloud-controller-manager.tar  kube-apiserver.tar  kube-controller-manager.tar  kube-proxy.tar  kube-scheduler.tar
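If you want all five images rather than just the api-server, a small loop can generate the corresponding docker load commands. As above, this sketch only prints them; pipe the output to sh on the target node (where Docker is installed) to run them:

```shell
# Print a docker load command for each release tar file
for tar in cloud-controller-manager kube-apiserver kube-controller-manager kube-proxy kube-scheduler; do
  echo "docker load < ${tar}.tar"
done
```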
  • Upload the newly generated kube-apiserver.tar to the master node of the Kubernetes environment;
  • Run docker load < kube-apiserver.tar to import the file into the local image repository;
  • Run docker images; as shown below, the local repository now has an extra kube-apiserver image with the TAG v1.13.5-beta.0.7_6c1e64b94a3e11-dirty:
[root@master 16]# docker load < kube-apiserver.tar
efd6f8f1a8c2: Loading layer [==================================================>]  138.5MB/138.5MB
Loaded image: k8s.gcr.io/kube-apiserver:v1.13.5-beta.0.7_6c1e64b94a3e11-dirty
[root@master 16]# docker images
REPOSITORY                           TAG                                     IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-apiserver            v1.13.5-beta.0.7_6c1e64b94a3e11-dirty   c9482a699ba7        About an hour ago   181MB
quay.io/coreos/flannel               v0.11.0-amd64                           ff281650a721        6 weeks ago         52.6MB
k8s.gcr.io/kube-proxy                v1.13.0                                 8fa56d18961f        3 months ago        80.2MB
k8s.gcr.io/kube-scheduler            v1.13.0                                 9508b7d8008d        3 months ago        79.6MB
k8s.gcr.io/kube-controller-manager   v1.13.0                                 d82530ead066        3 months ago        146MB
k8s.gcr.io/kube-apiserver            v1.13.0                                 f1ff9b7e3d6e        3 months ago        181MB
k8s.gcr.io/coredns                   1.2.6                                   f59dcacceff4        4 months ago        40MB
k8s.gcr.io/etcd                      3.2.24                                  3cab8e1b9802        5 months ago        220MB
k8s.gcr.io/pause                     3.1                                     da86e6ba6ca1        15 months ago       742kB
  • First check the current api-server Pod with kubectl describe pod kube-apiserver-master -n kube-system; as shown below, the current image is k8s.gcr.io/kube-apiserver:v1.13.0:
[root@master 16]# kubectl describe pod kube-apiserver-master -n kube-system
Name:               kube-apiserver-master
Namespace:          kube-system
Priority:           2000000000
PriorityClassName:  system-cluster-critical
Node:               master/192.168.182.130
Start Time:         Sat, 16 Mar 2019 21:53:22 +0800
Labels:             component=kube-apiserver
                    tier=control-plane
Annotations:        kubernetes.io/config.hash: 38da173e77f3fd0c39712abbb79b5529
                    kubernetes.io/config.mirror: 38da173e77f3fd0c39712abbb79b5529
                    kubernetes.io/config.seen: 2019-02-23T13:46:43.135821321+08:00
                    kubernetes.io/config.source: file
                    scheduler.alpha.kubernetes.io/critical-pod: 
Status:             Running
IP:                 192.168.182.130
Containers:
  kube-apiserver:
    Container ID:  docker://cb0234269ee2fbef23078cc1bbf6a2d6edd4b248cb733f793853dbfec2f0d814
    Image:         k8s.gcr.io/kube-apiserver:v1.13.0
  • Edit /etc/kubernetes/manifests/kube-apiserver.yaml and change the image field to the new tag v1.13.5-beta.0.7_6c1e64b94a3e11-dirty; this is a static pod manifest, so once the file is saved the kubelet detects the change and restarts the api-server with the new image;
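The image swap itself is a one-line substitution. The sketch below demonstrates it on a sample copy of the manifest's image line; on the master node, the same sed expression (after backing the file up) can be applied to /etc/kubernetes/manifests/kube-apiserver.yaml:

```shell
# Sample image line as it appears in the static pod manifest
cat > /tmp/apiserver_image.txt <<'EOF'
    image: k8s.gcr.io/kube-apiserver:v1.13.0
EOF
# Swap in the freshly built tag (the '#' delimiter avoids escaping slashes)
sed -i 's#kube-apiserver:v1.13.0#kube-apiserver:v1.13.5-beta.0.7_6c1e64b94a3e11-dirty#' /tmp/apiserver_image.txt
cat /tmp/apiserver_image.txt
```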

Verifying that the source change took effect

  • Run kubectl logs -f kube-apiserver-master -n kube-system to follow the Pod's log. The output below shows the details of an incoming request, proving that the modified code is live; this particular request creates a system Event object:
***********************************************************************************************
start create &{POST /api/v1/namespaces/kube-system/events HTTP/2.0 2 0 map[Accept:[application/vnd.kubernetes.protobuf, */*] Content-Type:[application/vnd.kubernetes.protobuf] User-Agent:[kubelet/v1.13.3 (linux/amd64) kubernetes/721bfa7] Content-Length:[359] Accept-Encoding:[gzip]] 0xc00ccd0870 <nil> 359 [] false 192.168.182.130:6443 map[] map[] <nil> map[] 192.168.182.131:58558 /api/v1/namespaces/kube-system/events 0xc00908cf20 <nil> <nil> 0xc00ccd0990}
-----------------------------------------------------------------------------------------------
goroutine 49344 [running]:
runtime/debug.Stack(0xc007076760, 0x1, 0x1)
    /usr/local/go/src/runtime/debug/stack.go:24 +0xa7
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers.createHandler.func1(0x5da9e80, 0xc00b83ce88, 0xc00bb46b00)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/create.go:49 +0x185
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints.restfulCreateResource.func1(0xc00ccd09f0, 0xc0087d4ae0)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/installer.go:1038 +0xb1
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/metrics.InstrumentRouteFunc.func1(0xc00ccd09f0, 0xc0087d4ae0)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/metrics/metrics.go:225 +0x20d
k8s.io/kubernetes/vendor/github.com/emicklei/go-restful.(*Container).dispatch(0xc000120510, 0x7fd553098398, 0xc00b83ce78, 0xc00bb46b00)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/emicklei/go-restful/container.go:277 +0x9b8
k8s.io/kubernetes/vendor/github.com/emicklei/go-restful.(*Container).Dispatch(0xc000120510, 0x7fd553098398, 0xc00b83ce78, 0xc00bb46b00)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/emicklei/go-restful/container.go:199 +0x57
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3eae926, 0xe, 0xc000120510, 0xc0006e22a0, 0x7fd553098398, 0xc00b83ce78, 0xc00bb46b00)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/handler.go:146 +0x4b1
k8s.io/kubernetes/vendor/k8s.io/kube-aggregator/pkg/apiserver.(*proxyHandler).ServeHTTP(0xc0002cc230, 0x7fd553098398, 0xc00b83ce78, 0xc00bb46b00)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kube-aggregator/pkg/apiserver/handler_proxy.go:90 +0x16a
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00a07f740, 0x7fd553098398, 0xc00b83ce78, 0xc00bb46b00)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux/pathrecorder.go:248 +0x394
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc008edc9a0, 0x7fd553098398, 0xc00b83ce78, 0xc00bb46b00)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux/pathrecorder.go:234 +0x8a
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3eb1a2a, 0xf, 0xc008d095f0, 0xc008edc9a0, 0x7fd553098398, 0xc00b83ce78, 0xc00bb46b00)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/handler.go:154 +0x661
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fd553098398, 0xc00b83ce78, 0xc00bb46b00)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/authorization.go:64 +0x4c3
net/http.HandlerFunc.ServeHTTP(0xc008eea740, 0x7fd553098398, 0xc00b83ce78, 0xc00bb46b00)
    /usr/local/go/src/net/http/server.go:1964 +0x44
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fd553098398, 0xc00b83ce78, 0xc00bb46b00)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/maxinflight.go:160 +0x3ff
net/http.HandlerFunc.ServeHTTP(0xc008ef11d0, 0x7fd553098398, 0xc00b83ce78, 0xc00bb46b00)
    /usr/local/go/src/net/http/server.go:1964 +0x44
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fd553098398, 0xc00b83ce78, 0xc00bb46b00)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/impersonation.go:50 +0x1eeb
net/http.HandlerFunc.ServeHTTP(0xc008eea780, 0x7fd553098398, 0xc00b83ce78, 0xc00bb46b00)
    /usr/local/go/src/net/http/server.go:1964 +0x44
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fd553098398, 0xc00b83ce78, 0xc00bb46a00)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/authentication.go:81 +0x456
net/http.HandlerFunc.ServeHTTP(0xc008ebd1d0, 0x7fd553098398, 0xc00b83ce78, 0xc00bb46a00)
    /usr/local/go/src/net/http/server.go:1964 +0x44
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc005f2ccc0, 0xc008f1c2e0, 0x5db4f80, 0xc00b83ce78, 0xc00bb46a00)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:108 +0xb3
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:97 +0x1b0

***********************************************************************************************
  • Next, let's create an rc resource ourselves. Open a new console connected to the Kubernetes master and run the following command to create a file named nginx-rc.yaml containing an rc for nginx:
tee nginx-rc.yaml <<-'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-controller
spec:
  replicas: 2
  selector:
    name: nginx
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          imagePullPolicy: Never
          ports:
            - containerPort: 80
EOF
  • In the directory containing nginx-rc.yaml, run kubectl apply -f nginx-rc.yaml to create the resource;
  • The console following the api-server log then shows the following, which is exactly the rc resource we just created:
***********************************************************************************************
start create &{POST /api/v1/namespaces/default/replicationcontrollers HTTP/2.0 2 0 map[Accept:[application/json] Content-Type:[application/json] User-Agent:[kubectl/v1.13.3 (linux/amd64) kubernetes/721bfa7] Content-Length:[818] Accept-Encoding:[gzip]] 0xc004b4dfb0 <nil> 818 [] false 192.168.182.130:6443 map[] map[] <nil> map[] 192.168.182.130:57856 /api/v1/namespaces/default/replicationcontrollers 0xc007b83600 <nil> <nil> 0xc004bc40f0}
-----------------------------------------------------------------------------------------------
goroutine 133183 [running]:
runtime/debug.Stack(0xc00a08c760, 0x1, 0x1)
    /usr/local/go/src/runtime/debug/stack.go:24 +0xa7
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers.createHandler.func1(0x5da9e80, 0xc006e07e58, 0xc002cc0100)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/create.go:49 +0x185
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints.restfulCreateResource.func1(0xc004bc4150, 0xc00a435680)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/installer.go:1038 +0xb1
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/metrics.InstrumentRouteFunc.func1(0xc004bc4150, 0xc00a435680)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/metrics/metrics.go:225 +0x20d
k8s.io/kubernetes/vendor/github.com/emicklei/go-restful.(*Container).dispatch(0xc000120510, 0x7fd553098398, 0xc006e07e48, 0xc002cc0100)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/emicklei/go-restful/container.go:277 +0x9b8
k8s.io/kubernetes/vendor/github.com/emicklei/go-restful.(*Container).Dispatch(0xc000120510, 0x7fd553098398, 0xc006e07e48, 0xc002cc0100)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/emicklei/go-restful/container.go:199 +0x57
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3eae926, 0xe, 0xc000120510, 0xc0006e22a0, 0x7fd553098398, 0xc006e07e48, 0xc002cc0100)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/handler.go:146 +0x4b1
k8s.io/kubernetes/vendor/k8s.io/kube-aggregator/pkg/apiserver.(*proxyHandler).ServeHTTP(0xc0002cc230, 0x7fd553098398, 0xc006e07e48, 0xc002cc0100)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kube-aggregator/pkg/apiserver/handler_proxy.go:90 +0x16a
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00a07f740, 0x7fd553098398, 0xc006e07e48, 0xc002cc0100)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux/pathrecorder.go:248 +0x394
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc008edc9a0, 0x7fd553098398, 0xc006e07e48, 0xc002cc0100)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux/pathrecorder.go:234 +0x8a
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3eb1a2a, 0xf, 0xc008d095f0, 0xc008edc9a0, 0x7fd553098398, 0xc006e07e48, 0xc002cc0100)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/handler.go:154 +0x661
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fd553098398, 0xc006e07e48, 0xc002cc0100)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/authorization.go:64 +0x4c3
net/http.HandlerFunc.ServeHTTP(0xc008eea740, 0x7fd553098398, 0xc006e07e48, 0xc002cc0100)
    /usr/local/go/src/net/http/server.go:1964 +0x44
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fd553098398, 0xc006e07e48, 0xc002cc0100)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/maxinflight.go:160 +0x3ff
net/http.HandlerFunc.ServeHTTP(0xc008ef11d0, 0x7fd553098398, 0xc006e07e48, 0xc002cc0100)
    /usr/local/go/src/net/http/server.go:1964 +0x44
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fd553098398, 0xc006e07e48, 0xc002cc0100)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/impersonation.go:50 +0x1eeb
net/http.HandlerFunc.ServeHTTP(0xc008eea780, 0x7fd553098398, 0xc006e07e48, 0xc002cc0100)
    /usr/local/go/src/net/http/server.go:1964 +0x44
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fd553098398, 0xc006e07e48, 0xc002cc0000)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/authentication.go:81 +0x456
net/http.HandlerFunc.ServeHTTP(0xc008ebd1d0, 0x7fd553098398, 0xc006e07e48, 0xc002cc0000)
    /usr/local/go/src/net/http/server.go:1964 +0x44
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00a28ae40, 0xc008f1c2e0, 0x5db4f80, 0xc006e07e48, 0xc002cc0000)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:108 +0xb3
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:97 +0x1b0

***********************************************************************************************
  • That completes the whole walkthrough of modifying, building, and running a Kubernetes image from source. When you run into interesting or puzzling code while studying the source, give this approach a try yourself;

Welcome to follow the Alibaba Cloud developer community blog: 程序员欣宸

On the road of learning you are not alone; Xinchen's originals keep you company...