How Istio Injection Works


Put simply, Istio injection adds an init container to the original Pod that initializes iptables rules, so that all of the Pod's traffic is proxied through a sidecar.

Introduction

Resources that support injection

  • Job
  • DaemonSet
  • ReplicaSet
  • Pod
  • Deployment
  • Service, Secret, and ConfigMap objects are left unchanged by injection

Injection methods

Two injection methods are supported:

  • automatic injection for every Pod in a namespace
  • manual injection
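Injection can also be toggled per workload: even in a namespace with automatic injection enabled, an annotation on the Pod template opts an individual workload out (recent Istio versions prefer a `sidecar.istio.io/inject` label with the same value). A minimal fragment:

```yaml
spec:
  template:
    metadata:
      annotations:
        sidecar.istio.io/inject: "false"  # skip sidecar injection for this workload
```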

Automatic namespace injection

Injection is enabled by labeling the namespace; namespaces do not get injection by default:

kubectl get ns <ns-name> --show-labels

# enable injection
kubectl label namespace <ns-name> istio-injection=enabled

# disable injection (--overwrite is required when the label is already set)
kubectl label namespace <ns-name> istio-injection=disabled --overwrite
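Under the hood, automatic injection is performed by Istio's mutating admission webhook, whose namespaceSelector matches the istio-injection label set above (selector details vary by Istio version and revision; the function below is an illustrative sketch, not Istio source):

```python
def injector_selects(namespace_labels: dict) -> bool:
    """Sketch of the sidecar injector webhook's namespaceSelector:
    only Pods created in namespaces labelled istio-injection=enabled
    are sent to the injector for mutation."""
    return namespace_labels.get("istio-injection") == "enabled"

# unlabelled (default) namespaces and explicitly disabled ones are skipped
print(injector_selects({"istio-injection": "enabled"}))   # True
print(injector_selects({}))                               # False
print(injector_selects({"istio-injection": "disabled"}))  # False
```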

Manual injection

  • Relevant commands
# create a namespace
kubectl create ns test

# deploy without injection
kubectl apply -f deployment-hello-app.yaml -n test

# generate the injected yaml
istioctl kube-inject -f deployment-hello-app.yaml -o deployment-hello-app-istio-inject.yaml

# or deploy the injected manifest directly
istioctl kube-inject -f deployment-hello-app.yaml | kubectl apply -f - -n test
  • deployment-hello-app.yaml configuration; for demonstration convenience:
    • replicas: 2 is changed to replicas: 1
    • namespace: default is removed
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app-dp
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-app
      release: canary
  template:
    metadata:
      name: hello-app-pod
      labels:
        app: hello-app
        release: canary
    spec:
      containers:
      - name: hello-app-1
        image: gcr.io/google-samples/hello-app:1.0
        ports:
        - name: http
          containerPort: 8080

Resource state after applying:

$ kubectl apply -f deployment-hello-app.yaml -n test
$ kubectl -n test get pod
NAME                            READY   STATUS    RESTARTS   AGE
hello-app-dp-7546b9dd7c-d4m6p   1/1     Running   0          11m
  • deployment-hello-app-istio-inject.yaml contents
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  name: hello-app-dp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-app
      release: canary
  strategy: {}
  template:
    metadata:
      annotations:
        kubectl.kubernetes.io/default-container: hello-app-1
        kubectl.kubernetes.io/default-logs-container: hello-app-1
        prometheus.io/path: /stats/prometheus
        prometheus.io/port: "15020"
        prometheus.io/scrape: "true"
        sidecar.istio.io/status: '{"initContainers":["istio-init"],"containers":["istio-proxy"],"volumes":["workload-socket","workload-certs","istio-envoy","istio-data","istio-podinfo","istio-token","istiod-ca-cert"],"imagePullSecrets":null,"revision":"default"}'
      creationTimestamp: null
      labels:
        app: hello-app
        release: canary
        security.istio.io/tlsMode: istio
        service.istio.io/canonical-name: hello-app
        service.istio.io/canonical-revision: latest
      name: hello-app-pod
    spec:
      containers:
      - image: gcr.io/google-samples/hello-app:1.0
        name: hello-app-1
        ports:
        - containerPort: 8080
          name: http
        resources: {}
      - args:
        - proxy
        - sidecar
        - --domain
        - $(POD_NAMESPACE).svc.cluster.local
        - --proxyLogLevel=warning
        - --proxyComponentLogLevel=misc:error
        - --log_output_level=default:info
        - --concurrency
        - "2"
        env:
        - name: JWT_POLICY
          value: third-party-jwt
        - name: PILOT_CERT_PROVIDER
          value: istiod
        - name: CA_ADDR
          value: istiod.istio-system.svc:15012
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: INSTANCE_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: SERVICE_ACCOUNT
          valueFrom:
            fieldRef:
              fieldPath: spec.serviceAccountName
        - name: HOST_IP
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
        - name: PROXY_CONFIG
          value: |
            {}
        - name: ISTIO_META_POD_PORTS
          value: |-
            [
                {"name":"http","containerPort":8080}
            ]
        - name: ISTIO_META_APP_CONTAINERS
          value: hello-app-1
        - name: ISTIO_META_CLUSTER_ID
          value: Kubernetes
        - name: ISTIO_META_INTERCEPTION_MODE
          value: REDIRECT
        - name: ISTIO_META_WORKLOAD_NAME
          value: hello-app-pod
        - name: ISTIO_META_OWNER
          value: kubernetes://apis/v1/namespaces/default/pods/hello-app-pod
        - name: ISTIO_META_MESH_ID
          value: cluster.local
        - name: TRUST_DOMAIN
          value: cluster.local
        image: docker.io/istio/proxyv2:1.14.3
        name: istio-proxy
        ports:
        - containerPort: 15090
          name: http-envoy-prom
          protocol: TCP
        readinessProbe:
          failureThreshold: 30
          httpGet:
            path: /healthz/ready
            port: 15021
          initialDelaySeconds: 1
          periodSeconds: 2
          timeoutSeconds: 3
        resources:
          limits:
            cpu: "2"
            memory: 1Gi
          requests:
            cpu: 10m
            memory: 40Mi
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - ALL
          privileged: false
          readOnlyRootFilesystem: true
          runAsGroup: 1337
          runAsNonRoot: true
          runAsUser: 1337
        volumeMounts:
        - mountPath: /var/run/secrets/workload-spiffe-uds
          name: workload-socket
        - mountPath: /var/run/secrets/workload-spiffe-credentials
          name: workload-certs
        - mountPath: /var/run/secrets/istio
          name: istiod-ca-cert
        - mountPath: /var/lib/istio/data
          name: istio-data
        - mountPath: /etc/istio/proxy
          name: istio-envoy
        - mountPath: /var/run/secrets/tokens
          name: istio-token
        - mountPath: /etc/istio/pod
          name: istio-podinfo
      initContainers:
      - args:
        - istio-iptables
        - -p
        - "15001"
        - -z
        - "15006"
        - -u
        - "1337"
        - -m
        - REDIRECT
        - -i
        - '*'
        - -x
        - ""
        - -b
        - '*'
        - -d
        - 15090,15021,15020
        image: docker.io/istio/proxyv2:1.14.3
        name: istio-init
        resources:
          limits:
            cpu: "2"
            memory: 1Gi
          requests:
            cpu: 10m
            memory: 40Mi
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_ADMIN
            - NET_RAW
            drop:
            - ALL
          privileged: false
          readOnlyRootFilesystem: false
          runAsGroup: 0
          runAsNonRoot: false
          runAsUser: 0
      volumes:
      - name: workload-socket
      - name: workload-certs
      - emptyDir:
          medium: Memory
        name: istio-envoy
      - emptyDir: {}
        name: istio-data
      - downwardAPI:
          items:
          - fieldRef:
              fieldPath: metadata.labels
            path: labels
          - fieldRef:
              fieldPath: metadata.annotations
            path: annotations
        name: istio-podinfo
      - name: istio-token
        projected:
          sources:
          - serviceAccountToken:
              audience: istio-ca
              expirationSeconds: 43200
              path: istio-token
      - configMap:
          name: istio-ca-root-cert
        name: istiod-ca-cert
status: {}
---

Apply the injected manifest:

$ kubectl apply -f deployment-hello-app-istio-inject.yaml -n test
deployment.apps/hello-app-dp configured
$ kubectl -n test get pod
NAME                            READY   STATUS        RESTARTS   AGE
hello-app-dp-7546b9dd7c-d4m6p   1/1     Terminating   0          12m
hello-app-dp-8668d4c488-2tcmd   2/2     Running       0          5s

The original Pod is deleted; the replacement Pod (READY 2/2) gains:

  • a new sidecar container named istio-proxy
  • a new initContainer named istio-init
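The injector also records exactly what it added in the sidecar.istio.io/status annotation seen in the generated manifest; parsing that annotation (its value copied verbatim from the YAML above) confirms the two additions:

```python
import json

# value copied from the sidecar.istio.io/status annotation above
status = json.loads(
    '{"initContainers":["istio-init"],"containers":["istio-proxy"],'
    '"volumes":["workload-socket","workload-certs","istio-envoy","istio-data",'
    '"istio-podinfo","istio-token","istiod-ca-cert"],'
    '"imagePullSecrets":null,"revision":"default"}'
)
print(status["initContainers"])  # ['istio-init']
print(status["containers"])      # ['istio-proxy']
print(len(status["volumes"]))    # 7
```

On a live cluster the same value can be read back from the running Pod's annotations instead of the manifest.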

How injection works

Listening ports change

  • After injection, the listening ports change
# ports in the original (non-injected) container
kubectl exec -it hello-app-dp-7546b9dd7c-lrm4l -- netstat -lpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 :::8080                 :::*                    LISTEN      1/hello-app
Active UNIX domain sockets (only servers)
Proto RefCnt Flags       Type       State         I-Node PID/Program name    Path

# ports in the injected Pod
$ kubectl -n test exec -it hello-app-dp-8668d4c488-2tcmd -c istio-proxy -- ss -ltp
State    Recv-Q   Send-Q      Local Address:Port           Peer Address:Port   Process
LISTEN   0        4096              0.0.0.0:15021               0.0.0.0:*       users:(("envoy",pid=17,fd=24))
LISTEN   0        4096              0.0.0.0:15021               0.0.0.0:*       users:(("envoy",pid=17,fd=23))
LISTEN   0        4096              0.0.0.0:15090               0.0.0.0:*       users:(("envoy",pid=17,fd=22))
LISTEN   0        4096              0.0.0.0:15090               0.0.0.0:*       users:(("envoy",pid=17,fd=21))
LISTEN   0        4096            127.0.0.1:15000               0.0.0.0:*       users:(("envoy",pid=17,fd=18))
LISTEN   0        4096              0.0.0.0:15001               0.0.0.0:*       users:(("envoy",pid=17,fd=35))
LISTEN   0        4096              0.0.0.0:15001               0.0.0.0:*       users:(("envoy",pid=17,fd=34))
LISTEN   0        4096            127.0.0.1:15004               0.0.0.0:*       users:(("pilot-agent",pid=1,fd=14))
LISTEN   0        4096              0.0.0.0:15006               0.0.0.0:*       users:(("envoy",pid=17,fd=37))
LISTEN   0        4096              0.0.0.0:15006               0.0.0.0:*       users:(("envoy",pid=17,fd=36))
LISTEN   0        4096                    *:15020                     *:*       users:(("pilot-agent",pid=1,fd=3))
LISTEN   0        4096                    *:http-alt                  *:*

$ kubectl -n test exec -it hello-app-dp-8668d4c488-2tcmd -c hello-app-1 -- netstat -lpnt
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
...
tcp        0      0 :::8080                 :::*                    LISTEN      1/hello-app

Changes to the iptables rules

  • The istio-init container's log shows that it rewrites the iptables rules
kubectl -n test logs -f hello-app-dp-8668d4c488-2tcmd -c istio-init
2022-08-27T19:52:02.252956Z	info	Istio iptables environment:
ENVOY_PORT=
INBOUND_CAPTURE_PORT=
ISTIO_INBOUND_INTERCEPTION_MODE=
ISTIO_INBOUND_TPROXY_ROUTE_TABLE=
ISTIO_INBOUND_PORTS=
ISTIO_OUTBOUND_PORTS=
ISTIO_LOCAL_EXCLUDE_PORTS=
ISTIO_EXCLUDE_INTERFACES=
ISTIO_SERVICE_CIDR=
ISTIO_SERVICE_EXCLUDE_CIDR=
ISTIO_META_DNS_CAPTURE=
INVALID_DROP=

2022-08-27T19:52:02.253713Z	info	Istio iptables variables:
PROXY_PORT=15001
PROXY_INBOUND_CAPTURE_PORT=15006
PROXY_TUNNEL_PORT=15008
PROXY_UID=1337
PROXY_GID=1337
INBOUND_INTERCEPTION_MODE=REDIRECT
INBOUND_TPROXY_MARK=1337
INBOUND_TPROXY_ROUTE_TABLE=133
INBOUND_PORTS_INCLUDE=*
INBOUND_PORTS_EXCLUDE=15090,15021,15020
OUTBOUND_OWNER_GROUPS_INCLUDE=*
OUTBOUND_OWNER_GROUPS_EXCLUDE=
OUTBOUND_IP_RANGES_INCLUDE=*
OUTBOUND_IP_RANGES_EXCLUDE=
OUTBOUND_PORTS_INCLUDE=
OUTBOUND_PORTS_EXCLUDE=
KUBE_VIRT_INTERFACES=
ENABLE_INBOUND_IPV6=false
DNS_CAPTURE=false
DROP_INVALID=false
CAPTURE_ALL_DNS=false
DNS_SERVERS=[],[]
OUTPUT_PATH=
NETWORK_NAMESPACE=
CNI_MODE=false
EXCLUDE_INTERFACES=

2022-08-27T19:52:02.254165Z	info	Writing following contents to rules file: /tmp/iptables-rules-1661629922253772697.txt3714470063
* nat
-N ISTIO_INBOUND
-N ISTIO_REDIRECT
-N ISTIO_IN_REDIRECT
-N ISTIO_OUTPUT
-A ISTIO_INBOUND -p tcp --dport 15008 -j RETURN
-A ISTIO_REDIRECT -p tcp -j REDIRECT --to-ports 15001
-A ISTIO_IN_REDIRECT -p tcp -j REDIRECT --to-ports 15006
-A PREROUTING -p tcp -j ISTIO_INBOUND
-A ISTIO_INBOUND -p tcp --dport 15090 -j RETURN
-A ISTIO_INBOUND -p tcp --dport 15021 -j RETURN
-A ISTIO_INBOUND -p tcp --dport 15020 -j RETURN
-A ISTIO_INBOUND -p tcp -j ISTIO_IN_REDIRECT
-A OUTPUT -p tcp -j ISTIO_OUTPUT
-A ISTIO_OUTPUT -o lo -s 127.0.0.6/32 -j RETURN
-A ISTIO_OUTPUT -o lo ! -d 127.0.0.1/32 -m owner --uid-owner 1337 -j ISTIO_IN_REDIRECT
-A ISTIO_OUTPUT -o lo -m owner ! --uid-owner 1337 -j RETURN
-A ISTIO_OUTPUT -m owner --uid-owner 1337 -j RETURN
-A ISTIO_OUTPUT -o lo ! -d 127.0.0.1/32 -m owner --gid-owner 1337 -j ISTIO_IN_REDIRECT
-A ISTIO_OUTPUT -o lo -m owner ! --gid-owner 1337 -j RETURN
-A ISTIO_OUTPUT -m owner --gid-owner 1337 -j RETURN
-A ISTIO_OUTPUT -d 127.0.0.1/32 -j RETURN
-A ISTIO_OUTPUT -j ISTIO_REDIRECT
COMMIT
2022-08-27T19:52:02.254323Z	info	Running command: iptables-restore --noflush /tmp/iptables-rules-1661629922253772697.txt3714470063
2022-08-27T19:52:02.432156Z	info	Writing following contents to rules file: /tmp/ip6tables-rules-1661629922431404132.txt2557741249

2022-08-27T19:52:02.432257Z	info	Running command: ip6tables-restore --noflush /tmp/ip6tables-rules-1661629922431404132.txt2557741249
2022-08-27T19:52:02.434491Z	info	Running command: iptables-save
2022-08-27T19:52:02.438479Z	info	Command output:
# Generated by iptables-save v1.8.4 on Sat Aug 27 19:52:02 2022
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:ISTIO_INBOUND - [0:0]
:ISTIO_IN_REDIRECT - [0:0]
:ISTIO_OUTPUT - [0:0]
:ISTIO_REDIRECT - [0:0]
-A PREROUTING -p tcp -j ISTIO_INBOUND
-A OUTPUT -p tcp -j ISTIO_OUTPUT
-A ISTIO_INBOUND -p tcp -m tcp --dport 15008 -j RETURN
-A ISTIO_INBOUND -p tcp -m tcp --dport 15090 -j RETURN
-A ISTIO_INBOUND -p tcp -m tcp --dport 15021 -j RETURN
-A ISTIO_INBOUND -p tcp -m tcp --dport 15020 -j RETURN
-A ISTIO_INBOUND -p tcp -j ISTIO_IN_REDIRECT
-A ISTIO_IN_REDIRECT -p tcp -j REDIRECT --to-ports 15006
-A ISTIO_OUTPUT -s 127.0.0.6/32 -o lo -j RETURN
-A ISTIO_OUTPUT ! -d 127.0.0.1/32 -o lo -m owner --uid-owner 1337 -j ISTIO_IN_REDIRECT
-A ISTIO_OUTPUT -o lo -m owner ! --uid-owner 1337 -j RETURN
-A ISTIO_OUTPUT -m owner --uid-owner 1337 -j RETURN
-A ISTIO_OUTPUT ! -d 127.0.0.1/32 -o lo -m owner --gid-owner 1337 -j ISTIO_IN_REDIRECT
-A ISTIO_OUTPUT -o lo -m owner ! --gid-owner 1337 -j RETURN
-A ISTIO_OUTPUT -m owner --gid-owner 1337 -j RETURN
-A ISTIO_OUTPUT -d 127.0.0.1/32 -j RETURN
-A ISTIO_OUTPUT -j ISTIO_REDIRECT
-A ISTIO_REDIRECT -p tcp -j REDIRECT --to-ports 15001
COMMIT
# Completed on Sat Aug 27 19:52:02 2022
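The traversal of the rules above can be condensed into plain logic (a simulation sketch for illustration only; the function names, parameters, and return values are mine, not any iptables API): inbound traffic is redirected to Envoy's 15006 unless it targets one of the proxy's own ports, and outbound traffic is redirected to 15001 unless it originates from the proxy itself (UID/GID 1337) or targets plain loopback.

```python
PROXY_UID = PROXY_GID = 1337  # the istio-proxy user; see -u 1337 in istio-init

def inbound(dport: int) -> int:
    """ISTIO_INBOUND (entered from PREROUTING): returns the effective port."""
    if dport in (15008, 15090, 15021, 15020):  # proxy's own ports pass through
        return dport
    return 15006  # everything else -> Envoy inbound listener

def outbound(src, dst, dport, uid, gid, out_iface) -> int:
    """ISTIO_OUTPUT (entered from OUTPUT): rules applied top to bottom."""
    lo = out_iface == "lo"
    if lo and src == "127.0.0.6":                       # Envoy passthrough source
        return dport
    if lo and dst != "127.0.0.1" and uid == PROXY_UID:  # proxy -> local app
        return 15006
    if lo and uid != PROXY_UID:
        return dport
    if uid == PROXY_UID:                                # Envoy's own upstream calls
        return dport
    if lo and dst != "127.0.0.1" and gid == PROXY_GID:  # GID mirror of the above
        return 15006
    if lo and gid != PROXY_GID:
        return dport
    if gid == PROXY_GID:
        return dport
    if dst == "127.0.0.1":                              # plain loopback is left alone
        return dport
    return 15001  # everything else -> Envoy outbound listener

# an app process (uid 1000) calling another service is redirected to Envoy:
print(outbound("10.244.1.7", "10.96.0.20", 80, 1000, 1000, "eth0"))  # 15001
# Envoy itself (uid 1337) talking to the upstream is not looped back:
print(outbound("10.244.1.7", "10.96.0.20", 80, 1337, 1337, "eth0"))  # 80
print(inbound(8080))   # 15006
print(inbound(15021))  # 15021
```

The UID/GID exemptions are what prevent an infinite redirect loop: without them, Envoy's own upstream connections would be sent back into Envoy.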
  • Viewing the changed iptables rules: running iptables -nvL -t nat directly inside the docker container fails (see the FAQ); either of the following two approaches works:
# find the node the Pod runs on
$ kubectl -n test describe pod hello-app-dp-8668d4c488-2tcmd | grep "Node:"
Node:             k8s-node-2/172.20.0.243

# on that node, find the container id
$ docker ps | grep hello-app-dp-8668d4c488-2tcmd | grep istio-proxy | awk '{print $1}'
a51c763fd3c4

# find the process id from the container id
$ docker inspect a51c763fd3c4 | grep "Pid"
            "Pid": 245557,
            "PidMode": "",
            "PidsLimit": null,
root@k8s-node-2:~# nsenter -n -t 245557
root@k8s-node-2:~# iptables -nvL -t nat
Chain PREROUTING (policy ACCEPT 1592 packets, 95533 bytes)
 pkts bytes target     prot opt in     out     source               destination
 1591 95460 ISTIO_INBOUND  tcp  --  *      *       0.0.0.0/0            0.0.0.0/0

Chain INPUT (policy ACCEPT 1591 packets, 95460 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain OUTPUT (policy ACCEPT 163 packets, 13770 bytes)
 pkts bytes target     prot opt in     out     source               destination
    9   540 ISTIO_OUTPUT  tcp  --  *      *       0.0.0.0/0            0.0.0.0/0

Chain POSTROUTING (policy ACCEPT 163 packets, 13770 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain ISTIO_INBOUND (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 RETURN     tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            tcp dpt:15008
    0     0 RETURN     tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            tcp dpt:15090
 1591 95460 RETURN     tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            tcp dpt:15021
    0     0 RETURN     tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            tcp dpt:15020
    0     0 ISTIO_IN_REDIRECT  tcp  --  *      *       0.0.0.0/0            0.0.0.0/0

Chain ISTIO_IN_REDIRECT (3 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 REDIRECT   tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            redir ports 15006

Chain ISTIO_OUTPUT (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 RETURN     all  --  *      lo      127.0.0.6            0.0.0.0/0
    0     0 ISTIO_IN_REDIRECT  all  --  *      lo      0.0.0.0/0           !127.0.0.1            owner UID match 1337
    0     0 RETURN     all  --  *      lo      0.0.0.0/0            0.0.0.0/0            ! owner UID match 1337
    9   540 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0            owner UID match 1337
    0     0 ISTIO_IN_REDIRECT  all  --  *      lo      0.0.0.0/0           !127.0.0.1            owner GID match 1337
    0     0 RETURN     all  --  *      lo      0.0.0.0/0            0.0.0.0/0            ! owner GID match 1337
    0     0 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0            owner GID match 1337
    0     0 RETURN     all  --  *      *       0.0.0.0/0            127.0.0.1
    0     0 ISTIO_REDIRECT  all  --  *      *       0.0.0.0/0            0.0.0.0/0

Chain ISTIO_REDIRECT (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 REDIRECT   tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            redir ports 15001
# alternative: make the network namespace visible to ip netns
$ mkdir -p /var/run/netns
$ ln -s /proc/245557/ns/net /var/run/netns/245557
$ ip netns ls
245557 (id: 3)
$ ip netns exec 245557 iptables -nvL -t nat
...

istio-proxy process information

$ kubectl -n test exec -it hello-app-dp-8668d4c488-2tcmd -c istio-proxy -- ps -ef
UID          PID    PPID  C STIME TTY          TIME CMD
istio-p+       1       0  0 19:52 ?        00:00:02 /usr/local/bin/pilot-agent proxy sidecar --domain test.svc.cluster
istio-p+      17       1  0 19:52 ?        00:00:27 /usr/local/bin/envoy -c etc/istio/proxy/envoy-rev0.json --restart-
istio-p+     141       0  2 20:52 pts/0    00:00:00 ps -ef

The istiod process

istiod runs in the istio-system namespace:

root@k8s-master:~# kubectl -n istio-system exec -it istiod-8675d9c57b-kdxnz -- bash
istio-proxy@istiod-8675d9c57b-kdxnz:/$ ps -ef
UID          PID    PPID  C STIME TTY          TIME CMD
istio-p+       1       0  0 18:23 ?        00:00:35 /usr/local/bin/pilot-discovery discovery --monitoringAddr=:15014 -
istio-p+      48       0  0 20:58 pts/0    00:00:00 bash
istio-p+      56      48  0 20:58 pts/0    00:00:00 ps -ef

Summary

  • The original container's port is unchanged, still 8080
  • The injected Pod's iptables rules are rewritten
  • The istio-proxy container runs the following processes:
    • envoy
      • 127.0.0.1:15000 Envoy admin port (commands/diagnostics)
      • 0.0.0.0:15001 Envoy outbound
      • 0.0.0.0:15006 Envoy inbound
      • 0.0.0.0:15021 readiness probe
      • 0.0.0.0:15090 Prometheus telemetry
    • pilot-agent, whose role can be inferred from the istio-proxy process listing (note that it, not envoy, listens on *:15020 for merged metrics and health checks):
      • pilot-agent starts first and generates the Envoy bootstrap configuration
      • it then starts envoy
      • it monitors and manages the running envoy, e.g. reloading it after configuration changes
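That inferred startup sequence can be sketched as follows (everything here, the function names, config fields, and file layout, is illustrative guesswork based on the process listing, not pilot-agent source):

```python
import json
import tempfile

def render_bootstrap() -> dict:
    """Toy stand-in for pilot-agent writing etc/istio/proxy/envoy-rev0.json:
    the real agent fills in the admin port and the istiod/CA address from
    its environment (CA_ADDR, PROXY_CONFIG, ...)."""
    return {
        "admin_address": "127.0.0.1:15000",             # Envoy admin port
        "xds_server": "istiod.istio-system.svc:15012",  # from CA_ADDR above
        "concurrency": 2,                               # --concurrency "2"
    }

def start_sidecar() -> str:
    bootstrap = render_bootstrap()
    with tempfile.NamedTemporaryFile(
        mode="w", suffix=".json", delete=False
    ) as f:
        json.dump(bootstrap, f)
        path = f.name
    # the real agent would now launch envoy -c <path> ... as a child process
    # (PID 17 in the listing above) and keep watching it, reloading Envoy
    # when certificates rotate or configuration changes
    return path

config_path = start_sidecar()
print(json.load(open(config_path))["concurrency"])  # 2
```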

FAQ

Running sudo inside the container fails

$ docker ps | grep hello-app-dp-8668d4c488-2tcmd | grep istio-proxy | awk '{print $1}'
a51c763fd3c4
root@k8s-node-2:~# docker exec -it --privileged a51c763fd3c4 bash
istio-proxy@hello-app-dp-8668d4c488-2tcmd:/$ iptables -nvL -t nat
Fatal: can't open lock file /run/xtables.lock: Read-only file system
istio-proxy@hello-app-dp-8668d4c488-2tcmd:/$ sudo iptables -nvL -t nat
sudo: effective uid is not 0, is /usr/bin/sudo on a file system with the 'nosuid' option set or an NFS file system without root privileges?
