Using KubeVPN for Rapid Local Development of Cloud-Native Applications

Summary: KubeVPN is a tool for cloud-native development. It lets you access services in a remote Kubernetes cluster directly from your local machine, using k8s DNS names or Pod IP / Service IP. It can intercept and debug workload traffic in a service mesh, and offers a development mode that runs containers locally in the same environment as the corresponding k8s pod. The quick start covers downloading the pre-built binary, installing from a custom Krew index, building from source, and installing a sample application. KubeVPN supports connecting to multiple clusters, DNS resolution, reverse proxying, and a development mode in Docker that keeps the environment consistent with the Kubernetes runtime. It is also compatible with multiple protocols and platforms.


KubeVPN is a cloud-native development tool. By connecting to the cloud Kubernetes network, you can access services in the remote cluster directly from your local machine using k8s DNS names or Pod IP / Service IP. It can intercept inbound traffic of workloads in the remote cluster and redirect it to your local machine, which, combined with a service mesh, makes debugging and development much easier. There is also a development mode that uses local Docker to emulate the k8s pod runtime and run the container locally (with the same environment variables, disks, and network).
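
The three core workflows each map to a subcommand, all demonstrated in the sections below:

kubevpn connect               # join the cluster network: Pod/Service IPs and cluster DNS work locally
kubevpn proxy <workload>      # intercept a workload's inbound traffic to your local machine
kubevpn dev <workload>        # run the pod's containers locally in Docker with the same environment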

Quick Start

Download the pre-built binary from GitHub Releases

Link

Install from the custom Krew index

(
  kubectl krew index add kubevpn https://github.com/kubenetworks/kubevpn.git && \
  kubectl krew install kubevpn/kubevpn && kubectl kubevpn 
) 

Build the binary yourself

(
  git clone https://github.com/kubenetworks/kubevpn.git && \
  cd kubevpn && make kubevpn && ./bin/kubevpn
)

Install bookinfo as a demo application

kubectl apply -f https://raw.githubusercontent.com/kubenetworks/kubevpn/master/samples/bookinfo.yaml
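
Before moving on, you can verify that the demo pods are up (a quick sanity check, not part of the original steps):

kubectl wait --for=condition=Ready pods --all --timeout=300s
kubectl get pods -o wide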

Features

Connect to the cluster network

➜  ~ kubevpn connect
Password:
start to connect
get cidr from cluster info...
get cidr from cluster info ok
get cidr from cni...
wait pod cni-net-dir-kubevpn to be running timeout, reason , ignore
get cidr from svc...
get cidr from svc ok
get cidr successfully
traffic manager not exist, try to create it...
label namespace default
create serviceAccount kubevpn-traffic-manager
create roles kubevpn-traffic-manager
create roleBinding kubevpn-traffic-manager
create service kubevpn-traffic-manager
create deployment kubevpn-traffic-manager
pod kubevpn-traffic-manager-66d969fd45-9zlbp is Pending
Container     Reason            Message
control-plane ContainerCreating
vpn           ContainerCreating
webhook       ContainerCreating

pod kubevpn-traffic-manager-66d969fd45-9zlbp is Running
Container     Reason           Message
control-plane ContainerRunning
vpn           ContainerRunning
webhook       ContainerRunning

Creating mutatingWebhook_configuration for kubevpn-traffic-manager
update ref count successfully
port forward ready
tunnel connected
dns service ok
+---------------------------------------------------------------------------+
|    Now you can access resources in the kubernetes cluster, enjoy it :)    |
+---------------------------------------------------------------------------+
➜  ~
➜  ~ kubectl get pods -o wide
NAME                                       READY   STATUS             RESTARTS   AGE     IP                NODE              NOMINATED NODE   READINESS GATES
authors-dbb57d856-mbgqk                    3/3     Running            0          7d23h   172.29.2.132      192.168.0.5       <none>           <none>
details-7d8b5f6bcf-hcl4t                   1/1     Running            0          61d     172.29.0.77       192.168.104.255   <none>           <none>
kubevpn-traffic-manager-66d969fd45-9zlbp   3/3     Running            0          74s     172.29.2.136      192.168.0.5       <none>           <none>
productpage-788df7ff7f-jpkcs               1/1     Running            0          61d     172.29.2.134      192.168.0.5       <none>           <none>
ratings-77b6cd4499-zvl6c                   1/1     Running            0          61d     172.29.0.86       192.168.104.255   <none>           <none>
reviews-85c88894d9-vgkxd                   1/1     Running            0          24d     172.29.2.249      192.168.0.5       <none>           <none>
➜  ~ ping 172.29.2.134
PING 172.29.2.134 (172.29.2.134): 56 data bytes
64 bytes from 172.29.2.134: icmp_seq=0 ttl=63 time=55.727 ms
64 bytes from 172.29.2.134: icmp_seq=1 ttl=63 time=56.270 ms
64 bytes from 172.29.2.134: icmp_seq=2 ttl=63 time=55.228 ms
64 bytes from 172.29.2.134: icmp_seq=3 ttl=63 time=54.293 ms
^C
--- 172.29.2.134 ping statistics ---
4 packets transmitted, 4 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 54.293/55.380/56.270/0.728 ms
➜  ~ kubectl get services -o wide
NAME                      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                              AGE     SELECTOR
authors                   ClusterIP   172.21.5.160    <none>        9080/TCP                             114d    app=authors
details                   ClusterIP   172.21.6.183    <none>        9080/TCP                             114d    app=details
kubernetes                ClusterIP   172.21.0.1      <none>        443/TCP                              319d    <none>
kubevpn-traffic-manager   ClusterIP   172.21.2.86     <none>        8422/UDP,10800/TCP,9002/TCP,80/TCP   2m28s   app=kubevpn-traffic-manager
productpage               ClusterIP   172.21.10.49    <none>        9080/TCP                             114d    app=productpage
ratings                   ClusterIP   172.21.3.247    <none>        9080/TCP                             114d    app=ratings
reviews                   ClusterIP   172.21.8.24     <none>        9080/TCP                             114d    app=reviews
➜  ~ curl 172.21.10.49:9080
<!DOCTYPE html>
<html>
  <head>
    <title>Simple Bookstore App</title>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1">

DNS resolution

➜  ~ curl productpage.default.svc.cluster.local:9080
<!DOCTYPE html>
<html>
  <head>
    <title>Simple Bookstore App</title>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1">

Short domain name resolution

➜  ~ curl productpage:9080
<!DOCTYPE html>
<html>
  <head>
    <title>Simple Bookstore App</title>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
...
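
Short names resolve because kubevpn wires the local resolver to the cluster DNS and appends the connected namespace's search domains. Conceptually, the effective resolver configuration looks something like this sketch (the nameserver IP and the exact file being modified depend on your OS and cluster):

nameserver 172.21.0.10        # the cluster DNS Service IP (example value)
search default.svc.cluster.local svc.cluster.local cluster.local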

Connect to multiple cluster networks

➜  ~ kubevpn status
ID Mode Cluster               Kubeconfig                 Namespace Status
0  full ccijorbccotmqodvr189g /Users/naison/.kube/config default   Connected
➜  ~ kubevpn connect -n default --kubeconfig ~/.kube/dev_config --lite
start to connect
got cidr from cache
get cidr successfully
update ref count successfully
traffic manager already exist, reuse it
port forward ready
tunnel connected
adding route...
dns service ok
+---------------------------------------------------------------------------+
|    Now you can access resources in the kubernetes cluster, enjoy it :)    |
+---------------------------------------------------------------------------+
➜  ~ kubevpn status
ID Mode Cluster               Kubeconfig                     Namespace Status
0  full ccijorbccotmqodvr189g /Users/naison/.kube/config     default   Connected
1  lite ccidd77aam2dtnc3qnddg /Users/naison/.kube/dev_config default   Connected
➜  ~

Reverse proxy

➜  ~ kubevpn proxy deployment/productpage
already connect to cluster
start to create remote inbound pod for deployment/productpage
workload default/deployment/productpage is controlled by a controller
rollout status for deployment/productpage
Waiting for deployment "productpage" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "productpage" rollout to finish: 1 old replicas are pending termination...
deployment "productpage" successfully rolled out
rollout status for deployment/productpage successfully
create remote inbound pod for deployment/productpage successfully
+---------------------------------------------------------------------------+
|    Now you can access resources in the kubernetes cluster, enjoy it :)    |
+---------------------------------------------------------------------------+
➜  ~
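
Next, start a local service listening on the same port 9080 to answer the intercepted traffic, for example this minimal Go server:
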
package main

import (
    "io"
    "net/http"
)

func main() {
    http.HandleFunc("/", func(writer http.ResponseWriter, request *http.Request) {
        _, _ = io.WriteString(writer, "Hello world!")
    })
    _ = http.ListenAndServe(":9080", nil)
}
➜  ~ curl productpage:9080
Hello world!%
➜  ~ curl productpage.default.svc.cluster.local:9080
Hello world!%
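
Both the short Service name and the fully qualified domain now resolve to your local process: all inbound traffic for deployment/productpage in the cluster is being answered by the Go server on your machine.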

Reverse proxy with service mesh support

HTTP, gRPC, WebSocket, and more are supported. Traffic that carries the specified header "a: 1" is routed to the local machine.

➜  ~ kubevpn proxy deployment/productpage --headers a=1
already connect to cluster
start to create remote inbound pod for deployment/productpage
patch workload default/deployment/productpage with sidecar
rollout status for deployment/productpage
Waiting for deployment "productpage" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "productpage" rollout to finish: 1 old replicas are pending termination...
deployment "productpage" successfully rolled out
rollout status for deployment/productpage successfully
create remote inbound pod for deployment/productpage successfully
+---------------------------------------------------------------------------+
|    Now you can access resources in the kubernetes cluster, enjoy it :)    |
+---------------------------------------------------------------------------+
➜  ~
➜  ~ curl productpage:9080
<!DOCTYPE html>
<html>
  <head>
    <title>Simple Bookstore App</title>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1">
...
➜  ~ curl productpage:9080 -H "a: 1"
Hello world!%

If you need to stop proxying the traffic, run the following command:

➜  ~ kubevpn leave deployments/productpage
leave workload deployments/productpage
workload default/deployments/productpage is controlled by a controller
leave workload deployments/productpage successfully

Enter development mode locally 🐳

Run a Kubernetes pod in a local Docker container and, combined with a service mesh, intercept traffic carrying the specified headers (or all traffic) to the local machine. This development mode depends on a local Docker daemon.

➜  ~ kubevpn dev deployment/authors --headers a=1 -it --rm --entrypoint sh
connectting to cluster
start to connect
got cidr from cache
get cidr successfully
update ref count successfully
traffic manager already exist, reuse it
port forward ready
tunnel connected
dns service ok
start to create remote inbound pod for Deployment.apps/authors
patch workload default/Deployment.apps/authors with sidecar
rollout status for Deployment.apps/authors
Waiting for deployment "authors" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "authors" rollout to finish: 1 old replicas are pending termination...
deployment "authors" successfully rolled out
rollout status for Deployment.apps/authors successfully
create remote inbound pod for Deployment.apps/authors successfully
tar: removing leading '/' from member names
/var/folders/30/cmv9c_5j3mq_kthx63sb1t5c0000gn/T/4563987760170736212:/var/run/secrets/kubernetes.io/serviceaccount
tar: Removing leading `/' from member names
tar: Removing leading `/' from hard link targets
/var/folders/30/cmv9c_5j3mq_kthx63sb1t5c0000gn/T/4044542168121221027:/var/run/secrets/kubernetes.io/serviceaccount
create docker network 56c25058d4b7498d02c2c2386ccd1b2b127cb02e8a1918d6d24bffd18570200e
Created container: nginx_default_kubevpn_a9a22
Wait container nginx_default_kubevpn_a9a22 to be running...
Container nginx_default_kubevpn_a9a22 is running on port 80/tcp:80 8888/tcp:8888 9080/tcp:9080 now
WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
Created main container: authors_default_kubevpn_a9a22
/opt/microservices # ls
app
/opt/microservices # ps -ef
PID   USER     TIME  COMMAND
    1 root      0:00 nginx: master process nginx -g daemon off;
   29 101       0:00 nginx: worker process
   30 101       0:00 nginx: worker process
   31 101       0:00 nginx: worker process
   32 101       0:00 nginx: worker process
   33 101       0:00 nginx: worker process
   34 root      0:00 {sh} /usr/bin/qemu-x86_64 /bin/sh sh
   44 root      0:00 ps -ef
/opt/microservices # apk add curl
fetch https://dl-cdn.alpinelinux.org/alpine/v3.14/main/x86_64/APKINDEX.tar.gz
fetch https://dl-cdn.alpinelinux.org/alpine/v3.14/community/x86_64/APKINDEX.tar.gz
(1/4) Installing brotli-libs (1.0.9-r5)
(2/4) Installing nghttp2-libs (1.43.0-r0)
(3/4) Installing libcurl (8.0.1-r0)
(4/4) Installing curl (8.0.1-r0)
Executing busybox-1.33.1-r3.trigger
OK: 8 MiB in 19 packages
/opt/microservices # ./app &
/opt/microservices # 2023/09/30 13:41:58 Start listening http port 9080 ...

/opt/microservices # curl localhost:9080/health
{"status":"Authors is healthy"}/opt/microservices # exit
prepare to exit, cleaning up
update ref count successfully
tun device closed
leave resource: deployments.apps/authors
workload default/deployments.apps/authors is controlled by a controller
leave resource: deployments.apps/authors successfully
clean up successfully
prepare to exit, cleaning up
update ref count successfully
clean up successfully
➜  ~

At this point two containers are started locally, corresponding to the two containers in the pod, and they share ports, so you can reach the other container directly via localhost:port. Moreover, all environment variables, mounted volumes, and network conditions are the same as in the pod, giving an environment truly consistent with the Kubernetes runtime.

➜  ~ docker ps
CONTAINER ID   IMAGE                           COMMAND                  CREATED          STATUS          PORTS                                                                NAMES
afdecf41c08d   naison/authors:latest           "sh"                     37 seconds ago   Up 36 seconds                                                                        authors_default_kubevpn_a9a22
fc04e42799a5   nginx:latest                    "/docker-entrypoint.…"   37 seconds ago   Up 37 seconds   0.0.0.0:80->80/tcp, 0.0.0.0:8888->8888/tcp, 0.0.0.0:9080->9080/tcp   nginx_default_kubevpn_a9a22
➜  ~

If you just want to start the image locally, there is a simpler way:

kubevpn dev deployment/authors --no-proxy -it --rm

For example:

➜  ~ kubevpn dev deployment/authors --no-proxy -it --rm
connectting to cluster
start to connect
got cidr from cache
get cidr successfully
update ref count successfully
traffic manager already exist, reuse it
port forward ready
tunnel connected
dns service ok
tar: removing leading '/' from member names
/var/folders/30/cmv9c_5j3mq_kthx63sb1t5c0000gn/T/5631078868924498209:/var/run/secrets/kubernetes.io/serviceaccount
tar: Removing leading `/' from member names
tar: Removing leading `/' from hard link targets
/var/folders/30/cmv9c_5j3mq_kthx63sb1t5c0000gn/T/1548572512863475037:/var/run/secrets/kubernetes.io/serviceaccount
create docker network 56c25058d4b7498d02c2c2386ccd1b2b127cb02e8a1918d6d24bffd18570200e
Created container: nginx_default_kubevpn_ff34b
Wait container nginx_default_kubevpn_ff34b to be running...
Container nginx_default_kubevpn_ff34b is running on port 80/tcp:80 8888/tcp:8888 9080/tcp:9080 now
WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
Created main container: authors_default_kubevpn_ff34b
2023/09/30 14:02:31 Start listening http port 9080 ...

At this point the command blocks, streaming the container logs by default.

If you want to specify the image used to start the container locally, use the --docker-image flag; when the image does not exist locally, it is pulled from the corresponding registry. If you want to specify startup arguments, use the --entrypoint flag with the command you want to execute, e.g. --entrypoint /bin/bash. For more flags, see kubevpn dev --help.
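
For example, a sketch that combines the two flags named above (the choice of ubuntu:22.04 as a debug image is arbitrary):

kubevpn dev deployment/authors -it --rm --docker-image ubuntu:22.04 --entrypoint /bin/bash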

DinD (Docker in Docker): using kubevpn inside Docker

If you want to start development mode locally using Docker in Docker (DinD), you need to add the flag -v /tmp:/tmp manually, because the program reads and writes the /tmp directory. One more note: when using DinD mode, to share the container network and PID namespace, you also need to specify the --network flag.

For example:

docker run -it --privileged --sysctl net.ipv6.conf.all.disable_ipv6=0 -v /var/run/docker.sock:/var/run/docker.sock -v /tmp:/tmp -v ~/.kube/config:/root/.kube/config --platform linux/amd64 naison/kubevpn:v2.0.0
➜  ~ docker run -it --privileged --sysctl net.ipv6.conf.all.disable_ipv6=0 -v /var/run/docker.sock:/var/run/docker.sock -v /tmp:/tmp -v ~/.kube/vke:/root/.kube/config --platform linux/amd64 naison/kubevpn:v2.0.0
Unable to find image 'naison/kubevpn:v2.0.0' locally
v2.0.0: Pulling from naison/kubevpn
445a6a12be2b: Already exists
bd6c670dd834: Pull complete
64a7297475a2: Pull complete
33fa2e3224db: Pull complete
e008f553422a: Pull complete
5132e0110ddc: Pull complete
5b2243de1f1a: Pull complete
662a712db21d: Pull complete
4f4fb700ef54: Pull complete
33f0298d1d4f: Pull complete
Digest: sha256:115b975a97edd0b41ce7a0bc1d8428e6b8569c91a72fe31ea0bada63c685742e
Status: Downloaded newer image for naison/kubevpn:v2.0.0
root@d0b3dab8912a:/app# kubevpn dev deployment/authors --headers user=naison -it --entrypoint sh

----------------------------------------------------------------------------------
    Warn: Use sudo to execute command kubevpn can not use user env KUBECONFIG.
    Because of sudo user env and user env are different.
    Current env KUBECONFIG value:
----------------------------------------------------------------------------------

hostname is d0b3dab8912a
connectting to cluster
start to connect
got cidr from cache
get cidr successfully
update ref count successfully
traffic manager already exist, reuse it
port forward ready
tunnel connected
dns service ok
start to create remote inbound pod for Deployment.apps/authors
patch workload default/Deployment.apps/authors with sidecar
rollout status for Deployment.apps/authors
Waiting for deployment "authors" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "authors" rollout to finish: 1 old replicas are pending termination...
deployment "authors" successfully rolled out
rollout status for Deployment.apps/authors successfully
create remote inbound pod for Deployment.apps/authors successfully
tar: removing leading '/' from member names
/tmp/6460902982794789917:/var/run/secrets/kubernetes.io/serviceaccount
tar: Removing leading `/' from member names
tar: Removing leading `/' from hard link targets
/tmp/5028895788722532426:/var/run/secrets/kubernetes.io/serviceaccount
network mode is container:d0b3dab8912a
Created container: nginx_default_kubevpn_6df63
Wait container nginx_default_kubevpn_6df63 to be running...
Container nginx_default_kubevpn_6df63 is running now
WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
Created main container: authors_default_kubevpn_6df5f
/opt/microservices # ps -ef
PID   USER     TIME  COMMAND
    1 root      0:00 {bash} /usr/bin/qemu-x86_64 /bin/bash /bin/bash
   14 root      0:02 {kubevpn} /usr/bin/qemu-x86_64 /usr/local/bin/kubevpn kubevpn dev deployment/authors --headers
   25 root      0:01 {kubevpn} /usr/bin/qemu-x86_64 /usr/local/bin/kubevpn /usr/local/bin/kubevpn daemon
   37 root      0:04 {kubevpn} /usr/bin/qemu-x86_64 /usr/local/bin/kubevpn /usr/local/bin/kubevpn daemon --sudo
   53 root      0:00 nginx: master process nginx -g daemon off;
(4/4) Installing curl (8.0.1-r0)
Executing busybox-1.33.1-r3.trigger
OK: 8 MiB in 19 packagesnx: worker process
/opt/microservices #
/opt/microservices # apk add curl
OK: 8 MiB in 19 packages
/opt/microservices # curl localhost:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
/opt/microservices # ls
app
/opt/microservices # ls -alh
total 6M
drwxr-xr-x    2 root     root        4.0K Oct 18  2021 .
drwxr-xr-x    1 root     root        4.0K Oct 18  2021 ..
-rwxr-xr-x    1 root     root        6.3M Oct 18  2021 app
/opt/microservices # ./app &
/opt/microservices # 2023/09/30 14:27:32 Start listening http port 9080 ...

/opt/microservices # curl authors:9080/health
/opt/microservices # curl authors:9080/health
{"status":"Authors is healthy"}/opt/microservices #
/opt/microservices # curl localhost:9080/health
{"status":"Authors is healthy"}/opt/microservices # exit
prepare to exit, cleaning up
update ref count successfully
tun device closed
leave resource: deployments.apps/authors
workload default/deployments.apps/authors is controlled by a controller
leave resource: deployments.apps/authors successfully
clean up successfully
prepare to exit, cleaning up
update ref count successfully
clean up successfully
root@d0b3dab8912a:/app# exit
exit
➜  ~
➜  ~ docker ps
CONTAINER ID   IMAGE                           COMMAND                  CREATED         STATUS         PORTS     NAMES
1cd576b51b66   naison/authors:latest           "sh"                     4 minutes ago   Up 4 minutes             authors_default_kubevpn_6df5f
56a6793df82d   nginx:latest                    "/docker-entrypoint.…"   4 minutes ago   Up 4 minutes             nginx_default_kubevpn_6df63
d0b3dab8912a   naison/kubevpn:v2.0.0     "/bin/bash"              5 minutes ago   Up 5 minutes             upbeat_noyce
➜  ~

Multiple protocols supported

  • TCP
  • UDP
  • ICMP
  • GRPC
  • WebSocket
  • HTTP
  • ...

Three major platforms supported

  • macOS
  • Linux
  • Windows

FAQ

1. What if the dependent images cannot be pulled, or docker.io is unreachable from the internal network?

Answer: there are two ways to solve this.

  • First, on a network that can reach docker.io, copy the image shown by the kubevpn version command to your own private registry, then add --image <new image> when starting the command.
    For example:
➜  ~ kubevpn version
KubeVPN: CLI
    Version: v2.0.0
    DaemonVersion: v2.0.0
    Image: docker.io/naison/kubevpn:v2.0.0
    Branch: feature/daemon
    Git commit: 7c3a87e14e05c238d8fb23548f95fa1dd6e96936
    Built time: 2023-09-30 22:01:51
    Built OS/Arch: darwin/arm64
    Built Go version: go1.20.5

The image is docker.io/naison/kubevpn:v2.0.0; push it to your own registry.

docker pull docker.io/naison/kubevpn:v2.0.0
docker tag docker.io/naison/kubevpn:v2.0.0 [docker registry]/[namespace]/[repo]:[tag]
docker push [docker registry]/[namespace]/[repo]:[tag]

Then you can use this image, as follows:

➜  ~ kubevpn connect --image [docker registry]/[namespace]/[repo]:[tag]
got cidr from cache
traffic manager not exist, try to create it...
pod [kubevpn-traffic-manager] status is Running
...
  • Second, use the --transfer-image option, which automatically copies the image to the address specified by --image.
    For example:
➜  ~ kubevpn connect --transfer-image --image nocalhost-team-docker.pkg.coding.net/nocalhost/public/kubevpn:v2.0.0
v2.0.0: Pulling from naison/kubevpn
Digest: sha256:450446850891eb71925c54a2fab5edb903d71103b485d6a4a16212d25091b5f4
Status: Image is up to date for naison/kubevpn:v2.0.0
The push refers to repository [nocalhost-team-docker.pkg.coding.net/nocalhost/public/kubevpn]
ecc065754c15: Preparing
f2b6c07cb397: Pushed
448eaa16d666: Pushed
f5507edfc283: Pushed
3b6ea9aa4889: Pushed
ecc065754c15: Pushed
feda785382bb: Pushed
v2.0.0: digest: sha256:85d29ebb53af7d95b9137f8e743d49cbc16eff1cdb9983128ab6e46e0c25892c size: 2000
start to connect
got cidr from cache
get cidr successfully
update ref count successfully
traffic manager already exist, reuse it
port forward ready
tunnel connected
dns service ok
+---------------------------------------------------------------------------+
|    Now you can access resources in the kubernetes cluster, enjoy it :)    |
+---------------------------------------------------------------------------+
➜  ~

2. When entering development mode with kubevpn dev, the command fails with exit code 137. How do I fix it?

dns service ok
tar: Removing leading `/' from member names
tar: Removing leading `/' from hard link targets
/var/folders/30/cmv9c_5j3mq_kthx63sb1t5c0000gn/T/7375606548554947868:/var/run/secrets/kubernetes.io/serviceaccount
Created container: server_vke-system_kubevpn_0db84
Wait container server_vke-system_kubevpn_0db84 to be running...
Container server_vke-system_kubevpn_0db84 is running on port 8888/tcp: 6789/tcp:6789 now
$ Status: , Code: 137
prepare to exit, cleaning up
port-forward occurs error, err: lost connection to pod, retrying
update ref count successfully
ref-count is zero, prepare to clean up resource
clean up successfully

This happens because the resources allocated to Docker Desktop are smaller than what the container needs at startup, so the container was OOM-killed. You can raise the Docker Desktop resource limits under Preferences --> Resources --> Memory.
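
You can check how much memory the Docker VM currently has with:

docker info --format '{{.MemTotal}}'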

3. When using Docker on WSL (Windows Subsystem for Linux) and entering development mode with kubevpn dev, the terminal cannot connect to the cluster network. Why, and how do I fix it?

Answer: this is because Docker on WSL uses the network of the Windows host, so even though the container is started inside WSL, it uses the Windows network rather than the WSL network. Solutions:

  • 1): Install Docker inside WSL, instead of using the Windows Docker Desktop.
  • 2): Run kubevpn connect on the Windows host, then run kubevpn dev in WSL to enter development mode.
  • 3): Start a container on the Windows host, run kubevpn connect inside that container, then in WSL
    run kubevpn dev --network container:$CONTAINER_ID (see the sketch below).
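
A sketch of option 3), reusing the naison/kubevpn image from the DinD example above (the container name kubevpn-net is illustrative):

# on the Windows host: start a privileged helper container, which drops into a shell
docker run -it --name kubevpn-net --privileged -v /tmp:/tmp -v ~/.kube/config:/root/.kube/config naison/kubevpn:v2.0.0
# inside that container, connect to the cluster
kubevpn connect
# in a WSL shell: enter dev mode sharing the helper container's network namespace
kubevpn dev deployment/authors --network container:kubevpn-net -it --rm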

4. After entering development mode with kubevpn dev, the container network is unreachable and the error 172.17.0.1:443 connect refused appears. How do I fix it?

Answer: most likely the Kubernetes container network CIDR conflicts with the Docker network CIDR.

Solutions:

  • Use the flag --connect-mode container to connect from inside a container, which also solves this problem.
  • Or edit ~/.docker/daemon.json to add a non-conflicting network, e.g. "bip": "172.15.0.1/24".
➜  ~ cat ~/.docker/daemon.json
{
  "builder": {
    "gc": {
      "defaultKeepStorage": "20GB",
      "enabled": true
    }
  },
  "experimental": false,
  "features": {
    "buildkit": true
  },
  "insecure-registries": [
  ]
}

Add the non-conflicting CIDR:

➜  ~ cat ~/.docker/daemon.json
{
  "builder": {
    "gc": {
      "defaultKeepStorage": "20GB",
      "enabled": true
    }
  },
  "experimental": false,
  "features": {
    "buildkit": true
  },
  "insecure-registries": [
  ],
  "bip": "172.15.0.1/24"
}

Restart Docker and repeat the steps.
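
On Linux with systemd that is typically:

sudo systemctl restart docker

Docker Desktop on macOS/Windows is restarted from its menu instead.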
