Istio Service Quality (Part 1)

Summary: A detailed tutorial on applying the Istio service mesh to service quality, covering traffic injection, traffic management, and traffic governance.

Author: Yin Zhengjie
Copyright notice: Original work. Reproduction is prohibited; violators will be held legally responsible.

I. Traffic injection

1. How Istio injection works (diagram)

As shown in the figure above, a Pod that we run on its own normally contains no additional containers.

When Istio injects into this Pod, however, two containers are added: an init container that performs network initialization at startup, and a sidecar container that handles communication with the outside world.

After injection completes, only the original business container and a container named istio-proxy remain running. The workflow is shown in the figure below.

Tip:
    During injection Istio deletes the original Pods and creates new ones from the updated manifest; the Pod labels change accordingly.
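
For reference, manual injection with istioctl (demonstrated below) is not the only option: Istio can also inject the sidecar automatically into every new Pod of a namespace once that namespace carries the istio-injection label. A minimal sketch, using this article's namespace:

# Enable automatic sidecar injection for the namespace
kubectl label namespace yinzhengjie istio-injection=enabled
# Verify the label; Pods created in this namespace afterwards are injected automatically
kubectl get namespace yinzhengjie --show-labels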

2. Manual injection example

    1. Create test Pods
[root@master241 ~]# cat deploy-apps.yaml 
apiVersion: v1
kind: Namespace
metadata:
  name: yinzhengjie

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-apps
  namespace: yinzhengjie
spec:
  replicas: 3
  selector:
    matchLabels:
      app: v1
  template:
    metadata:
      labels:
        app: v1
    spec:
      containers:
      - name: c1
        image: registry.cn-hangzhou.aliyuncs.com/yinzhengjie-k8s/apps:v1
        ports:
        - containerPort: 80
[root@master241 ~]# 
[root@master241 ~]# kubectl apply -f deploy-apps.yaml 
namespace/yinzhengjie created
deployment.apps/deploy-apps created
[root@master241 ~]# 
[root@master241 ~]# kubectl get pods -n yinzhengjie 
NAME                           READY   STATUS    RESTARTS   AGE
deploy-apps-5f45c6f4b4-5vhb4   1/1     Running   0          10s
deploy-apps-5f45c6f4b4-9sj4p   1/1     Running   0          10s
deploy-apps-5f45c6f4b4-9tmkm   1/1     Running   0          10s
[root@master241 ~]# 


    2. Manually inject the Pods
[root@master241 ~]# kubectl get pods -n yinzhengjie --show-labels
NAME                           READY   STATUS    RESTARTS   AGE   LABELS
deploy-apps-5f45c6f4b4-5vhb4   1/1     Running   0          12s   app=v1,pod-template-hash=5f45c6f4b4
deploy-apps-5f45c6f4b4-9sj4p   1/1     Running   0          12s   app=v1,pod-template-hash=5f45c6f4b4
deploy-apps-5f45c6f4b4-9tmkm   1/1     Running   0          12s   app=v1,pod-template-hash=5f45c6f4b4
[root@master241 ~]# 
[root@master241 ~]# istioctl kube-inject -f deploy-apps.yaml | kubectl -n yinzhengjie  apply -f -
namespace/yinzhengjie unchanged
deployment.apps/deploy-apps configured
[root@master241 ~]# 
[root@master241 ~]# kubectl get pods -n yinzhengjie --show-labels
NAME                           READY   STATUS    RESTARTS   AGE   LABELS
deploy-apps-548b56cf95-94lrq   2/2     Running   0          49s   app=v1,pod-template-hash=548b56cf95,security.istio.io/tlsMode=istio,service.istio.io/canonical-name=v1,service.istio.io/canonical-revision=latest
deploy-apps-548b56cf95-mhg44   2/2     Running   0          54s   app=v1,pod-template-hash=548b56cf95,security.istio.io/tlsMode=istio,service.istio.io/canonical-name=v1,service.istio.io/canonical-revision=latest
deploy-apps-548b56cf95-rlr77   2/2     Running   0          44s   app=v1,pod-template-hash=548b56cf95,security.istio.io/tlsMode=istio,service.istio.io/canonical-name=v1,service.istio.io/canonical-revision=latest
[root@master241 ~]#
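
Note what changed after injection: READY went from 1/1 to 2/2 because each Pod now runs the istio-proxy sidecar next to c1; the pod-template-hash changed because new Pods replaced the old ones; and Istio added its own labels (security.istio.io/tlsMode=istio plus the service.istio.io/canonical-name and canonical-revision pair).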

3. A closer look at the injection details

    1. Inspect the manifest after injection. Only the key fields are excerpted below; note the configuration that was added.
[root@master241 ~]# kubectl -n yinzhengjie get pods deploy-apps-548b56cf95-rlr77 -o yaml
apiVersion: v1
kind: Pod
metadata:
  ...
  name: deploy-apps-548b56cf95-rlr77
  namespace: yinzhengjie
  ...
spec:
  containers:
  - image: registry.cn-hangzhou.aliyuncs.com/yinzhengjie-k8s/apps:v1
    name: c1
    ...
  - args:
    ...
    image: docker.io/istio/proxyv2:1.17.8
    name: istio-proxy
    ...
  initContainers:
  - args:
    - istio-iptables
    - -p
    - "15001"
    - -z
    - "15006"
    - -u
    - "1337"
    - -m
    - REDIRECT
    - -i
    - '*'
    - -x
    - ""
    - -b
    - '*'
    - -d
    - 15090,15021,15020
    - --log_output_level=default:info
    image: docker.io/istio/proxyv2:1.17.8
    name: istio-init
    ...
  ...
status:
  ...
[root@master241 ~]# 
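
Tip:
    Reading the istio-init args above: -p 15001 is the port to which all outbound TCP traffic is redirected, -z 15006 is the inbound redirect port, -u 1337 is the UID whose traffic is excluded from interception (the istio-proxy user, so Envoy's own traffic is not captured again), -m REDIRECT selects the iptables REDIRECT mode, -i '*' and -b '*' intercept all outbound IP ranges and all inbound ports, and -d 15090,15021,15020 excludes Envoy's own telemetry, health-check, and agent ports from inbound interception.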


    2. Check the network IP addresses of istio-proxy and the business container c1
[root@master241 ~]# kubectl -n yinzhengjie exec -it deploy-apps-548b56cf95-rlr77 -c c1 -- ifconfig
eth0      Link encap:Ethernet  HWaddr 66:B8:20:A5:80:02  
          inet addr:10.100.1.35  Bcast:10.100.1.255  Mask:255.255.255.0
          inet6 addr: fe80::64b8:20ff:fea5:8002/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1450  Metric:1
          RX packets:4505 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3968 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:716322 (699.5 KiB)  TX bytes:4086232 (3.8 MiB)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:2338 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2338 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:5138923 (4.9 MiB)  TX bytes:5138923 (4.9 MiB)

[root@master241 ~]# 
[root@master241 ~]# kubectl -n yinzhengjie exec -it deploy-apps-548b56cf95-rlr77 -c istio-proxy -- ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.100.1.35  netmask 255.255.255.0  broadcast 10.100.1.255
        inet6 fe80::64b8:20ff:fea5:8002  prefixlen 64  scopeid 0x20<link>
        ether 66:b8:20:a5:80:02  txqueuelen 0  (Ethernet)
        RX packets 4541  bytes 719526 (719.5 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 3998  bytes 4089196 (4.0 MB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 2356  bytes 5141593 (5.1 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2356  bytes 5141593 (5.1 MB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@master241 ~]# 
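
Both containers report the same IP (10.100.1.35) and the same MAC address: all containers in a Pod share one network namespace, which is exactly what allows istio-proxy to transparently intercept c1's traffic through the iptables rules installed by istio-init.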


    3. Check the socket and process information of istio-proxy and the business container c1
[root@master241 ~]# kubectl -n yinzhengjie exec -it deploy-apps-548b56cf95-rlr77 -c c1 -- netstat -ntlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 0.0.0.0:15006           0.0.0.0:*               LISTEN      -
tcp        0      0 0.0.0.0:15006           0.0.0.0:*               LISTEN      -
tcp        0      0 0.0.0.0:15021           0.0.0.0:*               LISTEN      -
tcp        0      0 0.0.0.0:15021           0.0.0.0:*               LISTEN      -
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      1/nginx: master pro
tcp        0      0 0.0.0.0:15090           0.0.0.0:*               LISTEN      -
tcp        0      0 0.0.0.0:15090           0.0.0.0:*               LISTEN      -
tcp        0      0 127.0.0.1:15000         0.0.0.0:*               LISTEN      -
tcp        0      0 0.0.0.0:15001           0.0.0.0:*               LISTEN      -
tcp        0      0 0.0.0.0:15001           0.0.0.0:*               LISTEN      -
tcp        0      0 127.0.0.1:15004         0.0.0.0:*               LISTEN      -
tcp        0      0 :::15020                :::*                    LISTEN      -
tcp        0      0 :::80                   :::*                    LISTEN      1/nginx: master pro
[root@master241 ~]# 
[root@master241 ~]# kubectl -n yinzhengjie exec -it deploy-apps-548b56cf95-rlr77 -c istio-proxy -- netstat -ntlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 0.0.0.0:15006           0.0.0.0:*               LISTEN      14/envoy            
tcp        0      0 0.0.0.0:15006           0.0.0.0:*               LISTEN      14/envoy            
tcp        0      0 0.0.0.0:15021           0.0.0.0:*               LISTEN      14/envoy            
tcp        0      0 0.0.0.0:15021           0.0.0.0:*               LISTEN      14/envoy            
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      -                   
tcp        0      0 0.0.0.0:15090           0.0.0.0:*               LISTEN      14/envoy            
tcp        0      0 0.0.0.0:15090           0.0.0.0:*               LISTEN      14/envoy            
tcp        0      0 127.0.0.1:15000         0.0.0.0:*               LISTEN      14/envoy            
tcp        0      0 0.0.0.0:15001           0.0.0.0:*               LISTEN      14/envoy            
tcp        0      0 0.0.0.0:15001           0.0.0.0:*               LISTEN      14/envoy            
tcp        0      0 127.0.0.1:15004         0.0.0.0:*               LISTEN      1/pilot-agent       
tcp6       0      0 :::15020                :::*                    LISTEN      1/pilot-agent       
tcp6       0      0 :::80                   :::*                    LISTEN      -                   
[root@master241 ~]# 
[root@master241 ~]# kubectl -n yinzhengjie exec -it deploy-apps-548b56cf95-rlr77 -c istio-proxy -- ps -ef
UID          PID    PPID  C STIME TTY          TIME CMD
istio-p+       1       0  0 13:27 ?        00:00:04 /usr/local/bin/pilot-agent p
istio-p+      14       1  0 13:27 ?        00:00:15 /usr/local/bin/envoy -c etc/
istio-p+      49       0  0 14:08 pts/0    00:00:00 ps -ef
[root@master241 ~]# 
[root@master241 ~]# kubectl -n yinzhengjie exec -it deploy-apps-548b56cf95-rlr77 -c c1 -- ps -ef
PID   USER     TIME  COMMAND
    1 root      0:00 nginx: master process nginx -g daemon off;
   32 nginx     0:00 nginx: worker process
   33 nginx     0:00 nginx: worker process
   57 root      0:00 ps -ef
[root@master241 ~]# 
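
A few observations on the output above: because the network namespace is shared, netstat inside c1 also lists Envoy's listening sockets, but its PID/Program name column is empty there since the two containers do not share a PID namespace. The sidecar's well-known ports are 15001 (outbound capture), 15006 (inbound capture), 15000 (Envoy admin, localhost only), 15021 (health checks), 15090 (Envoy Prometheus telemetry) and 15020 (pilot-agent merged metrics/health). The ps output also shows the process model inside istio-proxy: pilot-agent runs as PID 1 and spawns and supervises envoy (PID 14).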


    4. Check the istio-init container's logs to see exactly what it did
[root@master241 ~]# kubectl -n yinzhengjie logs deploy-apps-548b56cf95-rlr77 -c istio-init
2024-03-11T13:27:56.773482Z    info    Istio iptables environment:
ENVOY_PORT=
INBOUND_CAPTURE_PORT=
ISTIO_INBOUND_INTERCEPTION_MODE=
ISTIO_INBOUND_TPROXY_ROUTE_TABLE=
ISTIO_INBOUND_PORTS=
ISTIO_OUTBOUND_PORTS=
ISTIO_LOCAL_EXCLUDE_PORTS=
ISTIO_EXCLUDE_INTERFACES=
ISTIO_SERVICE_CIDR=
ISTIO_SERVICE_EXCLUDE_CIDR=
ISTIO_META_DNS_CAPTURE=
INVALID_DROP=

2024-03-11T13:27:56.773642Z    info    Istio iptables variables:
PROXY_PORT=15001
PROXY_INBOUND_CAPTURE_PORT=15006
PROXY_TUNNEL_PORT=15008
PROXY_UID=1337
PROXY_GID=1337
INBOUND_INTERCEPTION_MODE=REDIRECT
INBOUND_TPROXY_MARK=1337
INBOUND_TPROXY_ROUTE_TABLE=133
INBOUND_PORTS_INCLUDE=*
INBOUND_PORTS_EXCLUDE=15090,15021,15020
OUTBOUND_OWNER_GROUPS_INCLUDE=*
OUTBOUND_OWNER_GROUPS_EXCLUDE=
OUTBOUND_IP_RANGES_INCLUDE=*
OUTBOUND_IP_RANGES_EXCLUDE=
OUTBOUND_PORTS_INCLUDE=
OUTBOUND_PORTS_EXCLUDE=
KUBE_VIRT_INTERFACES=
ENABLE_INBOUND_IPV6=false
DNS_CAPTURE=false
DROP_INVALID=false
CAPTURE_ALL_DNS=false
DNS_SERVERS=[],[]
NETWORK_NAMESPACE=
CNI_MODE=false
HOST_NSENTER_EXEC=false
EXCLUDE_INTERFACES=

2024-03-11T13:27:56.773822Z    info    Running iptables-restore with the following input:
* nat
-N ISTIO_INBOUND
-N ISTIO_REDIRECT
-N ISTIO_IN_REDIRECT
-N ISTIO_OUTPUT
-A ISTIO_INBOUND -p tcp --dport 15008 -j RETURN
-A ISTIO_REDIRECT -p tcp -j REDIRECT --to-ports 15001
-A ISTIO_IN_REDIRECT -p tcp -j REDIRECT --to-ports 15006
-A PREROUTING -p tcp -j ISTIO_INBOUND
-A ISTIO_INBOUND -p tcp --dport 15090 -j RETURN
-A ISTIO_INBOUND -p tcp --dport 15021 -j RETURN
-A ISTIO_INBOUND -p tcp --dport 15020 -j RETURN
-A ISTIO_INBOUND -p tcp -j ISTIO_IN_REDIRECT
-A OUTPUT -p tcp -j ISTIO_OUTPUT
-A ISTIO_OUTPUT -o lo -s 127.0.0.6/32 -j RETURN
-A ISTIO_OUTPUT -o lo ! -d 127.0.0.1/32 -m owner --uid-owner 1337 -j ISTIO_IN_REDIRECT
-A ISTIO_OUTPUT -o lo -m owner ! --uid-owner 1337 -j RETURN
-A ISTIO_OUTPUT -m owner --uid-owner 1337 -j RETURN
-A ISTIO_OUTPUT -o lo ! -d 127.0.0.1/32 -m owner --gid-owner 1337 -j ISTIO_IN_REDIRECT
-A ISTIO_OUTPUT -o lo -m owner ! --gid-owner 1337 -j RETURN
-A ISTIO_OUTPUT -m owner --gid-owner 1337 -j RETURN
-A ISTIO_OUTPUT -d 127.0.0.1/32 -j RETURN
-A ISTIO_OUTPUT -j ISTIO_REDIRECT
COMMIT
2024-03-11T13:27:56.773857Z    info    Running command: iptables-restore --noflush
2024-03-11T13:27:56.791702Z    info    Running ip6tables-restore with the following input:

2024-03-11T13:27:56.791826Z    info    Running command: ip6tables-restore --noflush
2024-03-11T13:27:56.795429Z    info    Running command: iptables-save 
2024-03-11T13:27:56.803831Z    info    Command output: 
# Generated by iptables-save v1.8.7 on Mon Mar 11 13:27:56 2024
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:ISTIO_INBOUND - [0:0]
:ISTIO_IN_REDIRECT - [0:0]
:ISTIO_OUTPUT - [0:0]
:ISTIO_REDIRECT - [0:0]
-A PREROUTING -p tcp -j ISTIO_INBOUND
-A OUTPUT -p tcp -j ISTIO_OUTPUT
-A ISTIO_INBOUND -p tcp -m tcp --dport 15008 -j RETURN
-A ISTIO_INBOUND -p tcp -m tcp --dport 15090 -j RETURN
-A ISTIO_INBOUND -p tcp -m tcp --dport 15021 -j RETURN
-A ISTIO_INBOUND -p tcp -m tcp --dport 15020 -j RETURN
-A ISTIO_INBOUND -p tcp -j ISTIO_IN_REDIRECT
-A ISTIO_IN_REDIRECT -p tcp -j REDIRECT --to-ports 15006
-A ISTIO_OUTPUT -s 127.0.0.6/32 -o lo -j RETURN
-A ISTIO_OUTPUT ! -d 127.0.0.1/32 -o lo -m owner --uid-owner 1337 -j ISTIO_IN_REDIRECT
-A ISTIO_OUTPUT -o lo -m owner ! --uid-owner 1337 -j RETURN
-A ISTIO_OUTPUT -m owner --uid-owner 1337 -j RETURN
-A ISTIO_OUTPUT ! -d 127.0.0.1/32 -o lo -m owner --gid-owner 1337 -j ISTIO_IN_REDIRECT
-A ISTIO_OUTPUT -o lo -m owner ! --gid-owner 1337 -j RETURN
-A ISTIO_OUTPUT -m owner --gid-owner 1337 -j RETURN
-A ISTIO_OUTPUT -d 127.0.0.1/32 -j RETURN
-A ISTIO_OUTPUT -j ISTIO_REDIRECT
-A ISTIO_REDIRECT -p tcp -j REDIRECT --to-ports 15001
COMMIT
# Completed on Mon Mar 11 13:27:56 2024

[root@master241 ~]#
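
Tip:
    Summarizing the rules above: inbound TCP traverses PREROUTING -> ISTIO_INBOUND -> ISTIO_IN_REDIRECT and is redirected to Envoy's port 15006 (except ports 15008, 15090, 15021 and 15020, which RETURN); outbound TCP traverses OUTPUT -> ISTIO_OUTPUT -> ISTIO_REDIRECT and is redirected to port 15001; and the --uid-owner/--gid-owner 1337 RETURN rules let traffic generated by istio-proxy itself bypass interception, preventing a redirect loop.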

II. Traffic management: routing (weight-based routing to simulate a canary release)

1. What is traffic management

Traffic management means controlling how request traffic is distributed; think of load balancers, canary releases, and stacks such as LVS, Nginx, and HAProxy.
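
In Istio, traffic management is expressed declaratively through a handful of CRDs, chiefly VirtualService (routing rules), DestinationRule (subsets and load-balancing policy) and Gateway (edge traffic). This section uses a VirtualService to split traffic between two versions of a Service by weight.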

2. Write the resource manifests

[root@master241 01-route]# cat 01-deploy-apps.yaml 
apiVersion: v1
kind: Namespace
metadata:
  name: yinzhengjie

---

apiVersion: apps/v1
# Note: prefer a Deployment for creating Pods rather than a ReplicationController; otherwise istioctl may fail at manual injection.
kind: Deployment
metadata:
  name: apps-v1
  namespace: yinzhengjie
spec:
  replicas: 1
  selector:
    matchLabels:
      app: xiuxian01
      version: v1
      auther: yinzhengjie
  template:
    metadata:
      labels:
        app: xiuxian01
        version: v1
        auther: yinzhengjie
    spec:
      containers:
      - name: c1
        ports:
        - containerPort: 80
        #image: registry.cn-hangzhou.aliyuncs.com/yinzhengjie-k8s/apps:v1 
        image: busybox
        command: ["/bin/sh","-c","echo 'c1' > /var/www/index.html;httpd -f -p 80 -h /var/www"]
---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: apps-v2
  namespace: yinzhengjie
spec:
  replicas: 1
  selector:
    matchLabels:
      app: xiuxian02
      version: v2
      auther: yinzhengjie
  template:
    metadata:
      labels:
        app: xiuxian02
        version: v2
        auther: yinzhengjie
    spec:
      containers:
      - name: c2
        ports:
        - containerPort: 80
        # image: registry.cn-hangzhou.aliyuncs.com/yinzhengjie-k8s/apps:v2
        image: busybox
        command: ["/bin/sh","-c","echo 'c2' > /var/www/index.html;httpd -f -p 80 -h /var/www"]
[root@master241 01-route]# 
[root@master241 01-route]# cat 02-svc-apps.yaml 
apiVersion: v1
kind: Service
metadata:
  name: apps-svc-v1
  namespace: yinzhengjie
spec:
  selector:
    version: v1
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    name: http

---

apiVersion: v1
kind: Service
metadata:
  name: apps-svc-v2
  namespace: yinzhengjie
spec:
  selector:
    version: v2
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    name: http

---

apiVersion: v1
kind: Service
metadata:
  name: apps-svc-all
  namespace: yinzhengjie
spec:
  selector:
    auther: yinzhengjie
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    name: http

[root@master241 01-route]# 
[root@master241 01-route]# cat 03-deploy-client.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: apps-client
  namespace: yinzhengjie
spec:
  replicas: 1
  selector:
    matchLabels:
      app: client-test
  template:
    metadata:
      labels:
        app: client-test
    spec:
      containers:
      - name: c1
        image: registry.cn-hangzhou.aliyuncs.com/yinzhengjie-k8s/apps:v1 
        command:
        - tail
        - -f
        - /etc/hosts
[root@master241 01-route]# 
[root@master241 01-route]# 
[root@master241 01-route]# cat 04-vs-apps-svc-all.yaml 
apiVersion: networking.istio.io/v1beta1
# apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: apps-svc-all-vs
  namespace: yinzhengjie
spec:
  # Name of the backend Service this VirtualService applies to
  hosts:
  - apps-svc-all
  # HTTP routing configuration
  http:
    # Define the route
  - route:
      # Destination of this route
    - destination:
        host: apps-svc-v1
      # Weight assigned to this destination
      weight: 90
    - destination:
        host: apps-svc-v2
      weight: 10
[root@master241 01-route]#

3. Manually inject istio-proxy

    1. Before injection
[root@master241 yinzhengjie]# kubectl get pods -n yinzhengjie 
NAME                          READY   STATUS    RESTARTS   AGE
apps-client-f84c89565-kmqkv   1/1     Running   0          31s
apps-v1-9bff7546c-fsnmn       1/1     Running   0          32s
apps-v2-6c957bf64b-lz65z      1/1     Running   0          32s
[root@master241 yinzhengjie]# 

    2. Perform manual injection
[root@master241 yinzhengjie]# istioctl kube-inject -f 03-deploy-client.yaml | kubectl -n yinzhengjie apply -f -
deployment.apps/apps-client configured
[root@master241 yinzhengjie]# 
[root@master241 yinzhengjie]# istioctl kube-inject -f 01-deploy-apps.yaml | kubectl -n yinzhengjie apply -f -
namespace/yinzhengjie unchanged
deployment.apps/apps-v1 configured
deployment.apps/apps-v2 configured
[root@master241 yinzhengjie]# 


    3. After injection
[root@master241 yinzhengjie]# kubectl get pods -n yinzhengjie 
NAME                          READY   STATUS        RESTARTS   AGE
apps-client-5cc67d864-g2r2v   2/2     Running       0          41s
apps-v1-85c976498b-5qp59      2/2     Running       0          30s
apps-v2-5bb84548fc-65r7x      2/2     Running       0          30s
[root@master241 yinzhengjie]#

4. Run the test

[root@master241 yinzhengjie]# kubectl -n yinzhengjie exec -it apps-client-5cc67d864-g2r2v -- sh
/ # 
/ # while true; do curl http://apps-svc-all;sleep 0.1;done
c1
c1
c1
c1
c1
c1
c1
c1
c1
c2
...
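
The response mix roughly matches the configured weights: about nine c1 replies for every c2, i.e. the 90/10 split declared in the VirtualService.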

5. A problem you may run into

In some environments the test above returns no data at all, no matter how it is run, which is puzzling.

One suspicion is that gateway information needs to be added.

It is recommended to consult the official sample configuration:

[root@master241 istio-1.17.8]# cat  samples/bookinfo/networking/bookinfo-gateway.yaml
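
For reference, a minimal sketch modeled on that bookinfo sample is shown below; the name apps-gateway is hypothetical. A Gateway only matters when traffic enters the mesh from outside through the istio ingress gateway, and it takes effect once the VirtualService references it in its gateways field (keeping the reserved name mesh preserves the in-mesh routing behavior):

apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: apps-gateway         # hypothetical name, for illustration only
  namespace: yinzhengjie
spec:
  selector:
    istio: ingressgateway    # select Istio's default ingress gateway Pods
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"

# Then reference it from the VirtualService, e.g.:
#   spec:
#     hosts:
#     - apps-svc-all
#     gateways:
#     - apps-gateway    # apply the routes to edge traffic entering via the gateway
#     - mesh            # keep applying them to in-mesh sidecar traffic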

III. Traffic management: user-based matching (targeted routing to simulate A/B testing)

1. Write the resource manifests

[root@master241 02-match]# cat 01-deploy-apps.yaml 
apiVersion: v1
kind: Namespace
metadata:
  name: yinzhengjie

---

apiVersion: apps/v1
# Note: prefer a Deployment for creating Pods rather than a ReplicationController; otherwise istioctl may fail at manual injection.
kind: Deployment
metadata:
  name: apps-v1
  namespace: yinzhengjie
spec:
  replicas: 1
  selector:
    matchLabels:
      app: xiuxian01
      version: v1
      auther: yinzhengjie
  template:
    metadata:
      labels:
        app: xiuxian01
        version: v1
        auther: yinzhengjie
    spec:
      containers:
      - name: c1
        ports:
        - containerPort: 80
        #image: registry.cn-hangzhou.aliyuncs.com/yinzhengjie-k8s/apps:v1 
        image: busybox
        command: ["/bin/sh","-c","echo 'c1' > /var/www/index.html;httpd -f -p 80 -h /var/www"]
---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: apps-v2
  namespace: yinzhengjie
spec:
  replicas: 1
  selector:
    matchLabels:
      app: xiuxian02
      version: v2
      auther: yinzhengjie
  template:
    metadata:
      labels:
        app: xiuxian02
        version: v2
        auther: yinzhengjie
    spec:
      containers:
      - name: c2
        ports:
        - containerPort: 80
        # image: registry.cn-hangzhou.aliyuncs.com/yinzhengjie-k8s/apps:v2
        image: busybox
        command: ["/bin/sh","-c","echo 'c2' > /var/www/index.html;httpd -f -p 80 -h /var/www"]
[root@master241 02-match]# 
[root@master241 02-match]# 
[root@master241 02-match]# cat 02-svc-apps.yaml 
apiVersion: v1
kind: Service
metadata:
  name: apps-svc-v1
  namespace: yinzhengjie
spec:
  selector:
    version: v1
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    name: http

---

apiVersion: v1
kind: Service
metadata:
  name: apps-svc-v2
  namespace: yinzhengjie
spec:
  selector:
    version: v2
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    name: http

---

apiVersion: v1
kind: Service
metadata:
  name: apps-svc-all
  namespace: yinzhengjie
spec:
  selector:
    auther: yinzhengjie
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    name: http

[root@master241 02-match]# 
[root@master241 02-match]# 
[root@master241 02-match]# cat 03-deploy-client.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: apps-client
  namespace: yinzhengjie
spec:
  replicas: 1
  selector:
    matchLabels:
      app: client-test
  template:
    metadata:
      labels:
        app: client-test
    spec:
      containers:
      - name: c1
        image: registry.cn-hangzhou.aliyuncs.com/yinzhengjie-k8s/apps:v1 
        command:
        - tail
        - -f
        - /etc/hosts
[root@master241 02-match]# 
[root@master241 02-match]# 
[root@master241 02-match]# cat 04-vs-apps-svc-all.yaml 
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: apps-svc-all-vs
  namespace: yinzhengjie
spec:
  hosts:
  - apps-svc-all
  http:
    # Define the match rules
  - match:
      # Route based on header matching; the header key can be anything you define.
    - headers:
        # Match requests whose yinzhengjie-username header equals "jasonyin"; this key is one we made up.
        yinzhengjie-username:
          # "eaxct"关键词是包含,也可以使用"prefix"进行前缀匹配。
          exact: jasonyin
    route:
    - destination:
        host: apps-svc-v1
  - route:
    - destination:
        host: apps-svc-v2
[root@master241 02-match]#
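
Note that HTTP route rules in a VirtualService are evaluated top-down and the first matching rule wins; the final route without a match clause therefore acts as the default for all non-matching traffic.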

2. Manual injection

istioctl kube-inject -f 03-deploy-client.yaml | kubectl -n yinzhengjie apply -f -
istioctl kube-inject -f 01-deploy-apps.yaml | kubectl -n yinzhengjie apply -f -
kubectl get all -n yinzhengjie

3. Run the test

[root@master241 yinzhengjie]# kubectl -n yinzhengjie exec -it apps-client-5cc67d864-g2r2v -- sh
/ # 
/ # while true; do curl -H  "yinzhengjie-username:jasonyin" http://apps-svc-all;sleep 0.1;done  # send the custom user-matching header
c1
c1
c1
c1
c1
c1
c1
...



/ # while true; do curl  http://apps-svc-all;sleep 0.1;done  # without the header
c2
c2
c2
c2
c2
c2
c2
...

IV. Traffic governance: a Bookinfo user-matching example

1. Create the DestinationRule and VirtualService resources

    1. Create the DestinationRule resources
[root@master241 istio-1.17.8]# kubectl apply -f samples/bookinfo/networking/destination-rule-all.yaml 
destinationrule.networking.istio.io/productpage created
destinationrule.networking.istio.io/reviews created
destinationrule.networking.istio.io/ratings created
destinationrule.networking.istio.io/details created
[root@master241 istio-1.17.8]# 
[root@master241 istio-1.17.8]# kubectl get dr
NAME          HOST          AGE
details       details       5s
productpage   productpage   5s
ratings       ratings       5s
reviews       reviews       5s
[root@master241 istio-1.17.8]# 


    2. Create the VirtualService resources
[root@master241 istio-1.17.8]# kubectl apply -f samples/bookinfo/networking/virtual-service-all-v1.yaml 
virtualservice.networking.istio.io/productpage created
virtualservice.networking.istio.io/reviews configured
virtualservice.networking.istio.io/ratings created
virtualservice.networking.istio.io/details created
[root@master241 istio-1.17.8]# 
[root@master241 istio-1.17.8]# kubectl get vs
NAME          GATEWAYS               HOSTS             AGE
bookinfo      ["bookinfo-gateway"]   ["*"]             4d8h
details                              ["details"]       6s
productpage                          ["productpage"]   6s
ratings                              ["ratings"]       6s
[root@master241 istio-1.17.8]#

2. Write the VirtualService manifest

    1. Write the manifest
[root@master241 03-match-bookinfo]# cat 01-vs-bookinfo-reviews 
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
  namespace: default
spec:
  hosts:
  - reviews
  http:
  - match:
    - headers:
        end-user:
          exact: yinzhengjie
    route:
      # Route to a subset defined by the DestinationRule
    - destination:
        # The official file "samples/bookinfo/networking/destination-rule-all.yaml" has the matching entry.
        # It does define a DestinationRule named "reviews"; the relevant part is copied below:
        #   apiVersion: networking.istio.io/v1alpha3
        #   kind: DestinationRule
        #   metadata:
        #     name: reviews
        #   spec:
        #     host: reviews
        #     subsets:
        #     - name: v1
        #       labels:
        #         version: v1
        #     - name: v2
        #       labels:
        #         version: v2
        #     - name: v3
        #       labels:
        #         version: v3
        # 
        host: reviews
        # This subset refers to the entry named v2 in the subsets list of the official reviews DestinationRule.
        # See the excerpt copied above; the v2 subset in turn selects Pods by their version: v2 label.
        subset: v2
  - route:
    - destination:
        host: reviews
[root@master241 03-match-bookinfo]# 
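
For context, the Bookinfo productpage application forwards an end-user header carrying the logged-in username on its downstream calls; that is what the match rule above keys on. Signing in as yinzhengjie routes reviews traffic to subset v2, while all other requests fall through to the default route and are load-balanced across every reviews version.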


    2. Create the resource
[root@master241 03-match-bookinfo]# kubectl apply -f 01-vs-bookinfo-reviews 
virtualservice.networking.istio.io/reviews created
[root@master241 03-match-bookinfo]# 
[root@master241 03-match-bookinfo]# kubectl get vs
NAME          GATEWAYS               HOSTS             AGE
bookinfo      ["bookinfo-gateway"]   ["*"]             4d8h
details                              ["details"]       2m29s
productpage                          ["productpage"]   2m29s
ratings                              ["ratings"]       2m29s
reviews                              ["reviews"]       2s
[root@master241 03-match-bookinfo]#

3. Access test

    1. Check the Service's NodePort; if none is assigned, change the type with the "kubectl edit svc" command
[root@master241 03-match-bookinfo]# kubectl get svc productpage 
NAME          TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
productpage   NodePort   10.200.92.185   <none>        9080:31533/TCP   4d8h
[root@master241 03-match-bookinfo]# 
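
A non-interactive alternative to "kubectl edit" with the same effect (assuming the Service is currently of type ClusterIP) is a one-line patch:

kubectl patch svc productpage -p '{"spec":{"type":"NodePort"}}'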


    2. As shown in the figure above, click the Sign in button to test
http://10.0.0.241:31533/productpage


    3. As shown in the figure below, the sign-in has succeeded
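
Expected behavior: after signing in as yinzhengjie, the reviews panel consistently renders version v2 (ratings shown as black stars) because the end-user header now matches the rule; signed out, or signed in as any other user, the page is served by the default route across all reviews versions.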
