An introduction to Kubernetes Calico networking. Calico is relatively complex, supports network policies, and can also run as a network policy plugin on top of flannel.
Introduction
Calico is a Layer 3 solution that, by default, encapsulates Pod traffic in an IP-in-IP tunnel. Its default Pod network is 192.168.0.0/16.
Network policy reference: NetworkPolicy
Installation
kubectl create -f https://projectcalico.docs.tigera.io/manifests/tigera-operator.yaml
kubectl create -f https://projectcalico.docs.tigera.io/manifests/custom-resources.yaml
# Check that all pods become Running
watch kubectl get pods -n calico-system
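Once everything is Running, the install can be verified through the status objects the Tigera operator creates (a sketch; resource names assume the default operator-based install above):

```shell
# Each component should report AVAILABLE=True
kubectl get tigerastatus

# The default IPPool created by the operator
# (192.168.0.0/16 unless customized in custom-resources.yaml)
kubectl get ippools.crd.projectcalico.org
```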
Install the Calico network policy controller
This provides network policy support for flannel (the combined deployment is known as Canal); see the implementation for reference.
curl https://projectcalico.docs.tigera.io/manifests/canal.yaml -O
kubectl apply -f canal.yaml
$ kubectl -n kube-system get pod
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-56fcbf9d6b-2mrpp 1/1 Running 0 8m16s
canal-b5dqt 2/2 Running 0 8m16s
canal-th62l 2/2 Running 0 8m16s
canal-tsqfj 2/2 Running 0 8m16s
Example
kubectl create namespace dev
kubectl create namespace uat
kubectl run dev-hello-app-1 --image=gcr.io/google-samples/hello-app:1.0 -n dev
kubectl run uat-hello-app-1 --image=gcr.io/google-samples/hello-app:1.0 -n uat
root@k8s-master:~# kubectl -n uat get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
uat-hello-app-1 1/1 Running 0 111s 10.244.2.5 k8s-node-2 <none> <none>
root@k8s-master:~# kubectl -n dev get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
dev-hello-app-1 1/1 Running 0 3m52s 10.244.2.3 k8s-node-2 <none> <none>
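Before any NetworkPolicy exists, all Pod-to-Pod traffic is allowed. A quick baseline check against the Pod IPs listed above (your IPs will differ):

```shell
# From the master node: with no policies in place, both ICMP and HTTP succeed
ping -c1 10.244.2.3          # dev-hello-app-1
curl 10.244.2.3:8080         # hello-app answers on port 8080

# Cross-namespace traffic is also unrestricted by default
kubectl -n uat exec uat-hello-app-1 -- ping -c1 10.244.2.3
```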
Deny all ingress traffic
- Deny all ingress traffic: deny-all-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-ingress
  # namespace: dev
spec:
  podSelector: {}
  policyTypes:
  - Ingress
- After the netpol is created, dev-hello-app-1 can no longer be pinged:
$ kubectl apply -f deny-all-ingress.yaml -n dev
networkpolicy.networking.k8s.io/deny-all-ingress created
$ kubectl -n dev get netpol
NAME POD-SELECTOR AGE
deny-all-ingress <none> 4m36s
$ kubectl -n dev describe netpol deny-all-ingress
Name: deny-all-ingress
Namespace: dev
Created on: 2022-03-26 04:39:48 +0800 CST
Labels: <none>
Annotations: <none>
Spec:
PodSelector: <none> (Allowing the specific traffic to all pods in this namespace)
Allowing ingress traffic:
<none> (Selected pods are isolated for ingress connectivity)
Not affecting egress traffic
Policy Types: Ingress
# ping
root@k8s-master:~/manifests/networkpolicy# ping -c1 10.244.2.3
PING 10.244.2.3 (10.244.2.3) 56(84) bytes of data.
--- 10.244.2.3 ping statistics ---
1 packets transmitted, 0 received, 100% packet loss, time 0ms
Allow all ingress traffic
- Allow all ingress traffic: allow-all-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-ingress
  # namespace: dev
spec:
  podSelector: {}
  ingress:
  - {}
  policyTypes:
  - Ingress
- After the netpol is created, dev-hello-app-1 can be pinged again:
$ kubectl apply -f allow-all-ingress.yaml -n dev
networkpolicy.networking.k8s.io/allow-all-ingress created
$ kubectl -n dev get netpol
NAME POD-SELECTOR AGE
allow-all-ingress <none> 7s
$ kubectl -n dev describe netpol allow-all-ingress
Name: allow-all-ingress
Namespace: dev
Created on: 2022-03-24 04:49:37 +0800 CST
Labels: <none>
Annotations: <none>
Spec:
PodSelector: <none> (Allowing the specific traffic to all pods in this namespace)
Allowing ingress traffic:
To Port: <any> (traffic allowed to all ports)
From: <any> (traffic not restricted by source)
Not affecting egress traffic
Policy Types: Ingress
# ping
root@k8s-master:~/manifests/networkpolicy# ping -c1 10.244.2.3
PING 10.244.2.3 (10.244.2.3) 56(84) bytes of data.
64 bytes from 10.244.2.3: icmp_seq=1 ttl=63 time=0.690 ms
--- 10.244.2.3 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.690/0.690/0.690/0.000 ms
Customizing Pod rules by label
Allow specific inbound traffic
kubectl label pod dev-hello-app-1 env=dev -n dev
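NetworkPolicies are additive: a Pod may receive any traffic permitted by at least one policy that selects it. If the allow-all-ingress policy from the previous section is still present, it would mask the ipBlock restrictions below, so remove the earlier test policies first:

```shell
# Remove the earlier experiments so only the new policy applies
kubectl -n dev delete netpol deny-all-ingress allow-all-ingress
```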
- Allow port 8080: allow-hello-app-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-hello-app-ingress
spec:
  podSelector:
    matchLabels:
      env: dev
  ingress:
  - from:
    - ipBlock:
        cidr: 10.244.0.0/16
        except:
        - 10.244.2.5/32
    ports:
    - protocol: TCP
      port: 8080
  policyTypes:
  - Ingress
Rule explanation: allow inbound traffic from 10.244.0.0/16, except from 10.244.2.5/32 (uat-hello-app-1), and only to TCP port 8080. Note that ping still fails even from permitted sources, because only TCP port 8080 is allowed, not ICMP.
$ kubectl -n dev apply -f allow-hello-app-ingress.yaml
networkpolicy.networking.k8s.io/allow-hello-app-ingress created
root@k8s-master:~/manifests/networkpolicy# curl 10.244.2.3:8080
Hello, world!
Version: 1.0.0
Hostname: dev-hello-app-1
root@k8s-master:~/manifests/networkpolicy# ping -c1 10.244.2.3
PING 10.244.2.3 (10.244.2.3) 56(84) bytes of data.
--- 10.244.2.3 ping statistics ---
1 packets transmitted, 0 received, 100% packet loss, time 0ms
root@k8s-master:~/manifests/networkpolicy# kubectl -n uat exec -it uat-hello-app-1 -- wget 10.244.2.3:8080
Connecting to 10.244.2.3:8080 (10.244.2.3:8080)
^Ccommand terminated with exit code 130
root@k8s-master:~/manifests/networkpolicy# kubectl -n uat exec -it uat-hello-app-1 -- ping -c1 10.244.2.3
PING 10.244.2.3 (10.244.2.3): 56 data bytes
--- 10.244.2.3 ping statistics ---
1 packets transmitted, 0 packets received, 100% packet loss
command terminated with exit code 1
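Besides ipBlock, a from entry can also select peers by labels. A hedged sketch (the policy name and the env=uat namespace label and app=frontend Pod label are illustrative assumptions, not labels set earlier in this walkthrough) that would admit traffic only from Pods labeled app=frontend running in namespaces labeled env=uat:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-uat-frontend   # hypothetical example name
spec:
  podSelector:
    matchLabels:
      env: dev
  ingress:
  - from:
    - namespaceSelector:          # peer must be in a namespace labeled env=uat
        matchLabels:
          env: uat
      podSelector:                # AND the peer Pod must be labeled app=frontend
        matchLabels:
          app: frontend
  policyTypes:
  - Ingress
```

Putting namespaceSelector and podSelector in the same from element means both must match; listing them as two separate elements would mean either one suffices.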
Deny all egress traffic
deny-all-egress.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-egress
  # namespace: dev
spec:
  podSelector: {}
  policyTypes:
  - Egress
kubectl -n dev apply -f deny-all-egress.yaml
- Pinging an IP outside the Pod (here the uat Pod, 10.244.2.5) from a Pod in dev now fails:
root@k8s-master:~/manifests/networkpolicy# kubectl -n dev exec -it dev-hello-app-1 -- ping -c1 10.244.2.5
PING 10.244.2.5 (10.244.2.5): 56 data bytes
--- 10.244.2.5 ping statistics ---
1 packets transmitted, 0 packets received, 100% packet loss
command terminated with exit code 1
Allow all egress traffic
allow-all-egress.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-egress
  # namespace: dev
spec:
  podSelector: {}
  egress:
  - {}
  policyTypes:
  - Egress
kubectl -n dev apply -f allow-all-egress.yaml
- Pinging an IP outside the Pod (here the uat Pod, 10.244.2.5) from a Pod in dev now succeeds:
root@k8s-master:~/manifests/networkpolicy# kubectl -n dev exec -it dev-hello-app-1 -- ping -c1 10.244.2.5
PING 10.244.2.5 (10.244.2.5): 56 data bytes
64 bytes from 10.244.2.5: seq=0 ttl=63 time=1.410 ms
--- 10.244.2.5 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 1.410/1.410/1.410 ms
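A common middle ground between deny-all and allow-all egress is to deny everything except DNS, so Pods can still resolve Service names. A sketch (the policy name is illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-egress          # hypothetical example name
spec:
  podSelector: {}
  egress:
  - ports:                        # allow outbound DNS lookups only
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53
  policyTypes:
  - Egress
```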
FAQ
BIRD is not ready
Error log:
calico/node is not ready: BIRD is not ready: Error querying BIRD: unable to connect to BIRDv4 socket: dial unix /var/run/calico/bird.ctl: connect: connection refused
Solution:
$ vim calico.yaml
# Add the following to the calico-node container's env list
# Cluster type to identify the deployment type
- name: IP_AUTODETECTION_METHOD
  value: "interface=eth.*"   # or: value: "interface=eth0"
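If calico-node is already deployed, the same environment variable can be set on the live DaemonSet instead of editing the manifest (assuming the DaemonSet lives in kube-system, as with a manifest-based install):

```shell
# Set the autodetection method on the running DaemonSet and watch it roll out
kubectl -n kube-system set env daemonset/calico-node IP_AUTODETECTION_METHOD=interface=eth0
kubectl -n kube-system rollout status daemonset/calico-node
```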