Ingress Controller


This article introduces the role and architecture of Ingress and the Ingress Controller, and walks through deploying ingress-nginx with examples. A Service works at layer 4 (TCP), while HTTPS lives at layer 7, so HTTPS routing cannot be implemented at the Service level. Ingress fills this gap: it provides HTTP and HTTPS routing from outside the Kubernetes cluster to Services within it, along with load balancing and more.

Introduction

Architecture

Figure: ingress-nginx architecture

Proxy architecture:

client --> external LBaaS (layer 4) --> Ingress Controller (Deployment Pods sharing the host network namespace, working at layer 7, able to terminate HTTPS sessions) --proxy (grouped by Service)--> Pod

Notes:

  • The Ingress Controller uses the Service to look up the Endpoints of the backend Pods

What is Ingress

Ingress is a collection of rules that allow inbound connections to reach backend endpoints (such as a Service's Endpoints). Ingress provides:

  • Externally reachable URLs for Services
  • Load balancing of traffic
  • SSL/TLS termination (offloading)
  • Name-based virtual hosting

To view the Ingress resource definition:

kubectl explain ingress

Ingress Controllers

An Ingress Controller is essentially a watcher. It talks to the Kubernetes API continuously, observing changes to backend Services, Pods, and related objects in real time (Pods being added or removed, Services being created or deleted, and so on). When it sees such a change, it combines the new state with the Ingress rules to generate a configuration, then updates the reverse-proxy load balancer and reloads its configuration, thereby implementing service discovery.
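The watch-and-reload loop described above can be sketched roughly as follows. This is my own illustration with hypothetical names, not the real ingress-nginx code; the key idea is that the proxy is only reloaded when the rendered configuration actually changes:

```python
import hashlib

def render_config(ingresses, endpoints):
    """Render a toy proxy config from Ingress host rules and Service endpoints."""
    lines = []
    for host, svc in sorted(ingresses.items()):
        backends = ",".join(endpoints.get(svc, ["<no endpoints>"]))
        lines.append(f"server {host} -> {backends}")
    return "\n".join(lines)

def reconcile(state, last_hash, reload_proxy):
    """One control-loop pass: regenerate config, reload the proxy only on change."""
    cfg = render_config(state["ingresses"], state["endpoints"])
    h = hashlib.sha256(cfg.encode()).hexdigest()
    if h != last_hash:
        reload_proxy(cfg)
    return h

# Simulate API events: an Endpoints change triggers a reload, a no-op does not.
state = {
    "ingresses": {"hello-app.example.com": "hello-app"},
    "endpoints": {"hello-app": ["10.244.1.5:8080"]},
}
reloads = []
h = reconcile(state, None, reloads.append)   # initial sync -> reload
state["endpoints"]["hello-app"].append("10.244.2.7:8080")
h = reconcile(state, h, reloads.append)      # endpoint added -> reload
h = reconcile(state, h, reloads.append)      # nothing changed -> no reload
print(len(reloads))  # 2
```

The real controller performs the same comparison against the live nginx.conf so that bursts of identical events do not cause reload storms.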

Unlike other kinds of controllers, the Ingress Controller is not part of kube-controller-manager, and no Ingress Controller is started by default.

Ingress Controller implementations

Kubernetes, as a CNCF open-source project, currently supports and maintains the AWS, GCE, and nginx ingress controllers; numerous third-party implementations are also available.

IngressClass

An IngressClass names the controller that implements it, and is used to distinguish between multiple Ingress Controllers deployed in the same Kubernetes cluster. Example:

$ kubectl -n ingress-nginx get ingressclasses nginx -o yaml
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"networking.k8s.io/v1","kind":"IngressClass","metadata":{"annotations":{},"labels":{"app.kubernetes.io/component":"controller","app.kubernetes.io/instance":"ingress-nginx","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/name":"ingress-nginx","app.kubernetes.io/version":"1.1.1","helm.sh/chart":"ingress-nginx-4.0.15"},"name":"nginx"},"spec":{"controller":"k8s.io/ingress-nginx"}}
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    helm.sh/chart: ingress-nginx-4.0.15
  name: nginx
spec:
  controller: k8s.io/ingress-nginx
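When several controllers are installed, one IngressClass can be marked as the cluster default, so that Ingresses that omit ingressClassName are still handled. A sketch, using the annotation documented upstream for this purpose:

```yaml
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx
  annotations:
    # Ingresses without an ingressClassName default to this class
    ingressclass.kubernetes.io/is-default-class: "true"
spec:
  controller: k8s.io/ingress-nginx
```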

Deploy ingress-nginx

  • Download the deployment manifest
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.1/deploy/static/provider/cloud/deploy.yaml
  • If the images fail to pull, switch to the mirror images below
k8s.gcr.io/ingress-nginx/controller -> k8sgcrioingressnginx/controller
k8s.gcr.io/ingress-nginx/kube-webhook-certgen -> k8sgcrioingressnginx/kube-webhook-certgen
  • Install
$ kubectl apply -f deploy.yaml
namespace/ingress-nginx created
serviceaccount/ingress-nginx created
configmap/ingress-nginx-controller created
clusterrole.rbac.authorization.k8s.io/ingress-nginx created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx created
role.rbac.authorization.k8s.io/ingress-nginx created
rolebinding.rbac.authorization.k8s.io/ingress-nginx created
service/ingress-nginx-controller-admission created
service/ingress-nginx-controller created
deployment.apps/ingress-nginx-controller created
ingressclass.networking.k8s.io/nginx created
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission created
serviceaccount/ingress-nginx-admission created
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
role.rbac.authorization.k8s.io/ingress-nginx-admission created
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
job.batch/ingress-nginx-admission-create created
job.batch/ingress-nginx-admission-patch created
  • Install the ingress-nginx kubectl plugin (optional)

https://kubernetes.github.io/ingress-nginx/kubectl-plugin/

How to expose the controller

  • Use a LoadBalancer Service, e.g. with the MetalLB load balancer (the approach used in this article)
  • Run the Deployment Pods sharing the host network namespace (hostNetwork)
  • Or change the Service type to NodePort, as follows:
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: NodePort    # changed to NodePort *
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP
      nodePort: 30080    # expose HTTP on node port 30080 *
    - name: https
      port: 443
      targetPort: 443
      protocol: TCP
      nodePort: 30443    # expose HTTPS on node port 30443 *
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

Upstream configuration

By default, the NGINX ingress controller fills the NGINX upstream with the Pod IPs and ports behind the Service (its Endpoints). This can be changed with the nginx.ingress.kubernetes.io/service-upstream annotation, which makes the controller use the Service's ClusterIP and port as the single upstream endpoint instead.
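For example, the annotation is set on the Ingress itself. A sketch (the host and Service names here are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-app-via-service
  annotations:
    # Use the Service ClusterIP/port as the single upstream endpoint
    # instead of the Pod IPs from the Endpoints object.
    nginx.ingress.kubernetes.io/service-upstream: "true"
spec:
  ingressClassName: nginx
  rules:
  - host: hello-app.xiexianbin.cn
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: hello-app
            port:
              number: 80
```

A trade-off worth noting: with service-upstream enabled, kube-proxy does the Pod-level balancing, so nginx-level features that depend on per-Pod endpoints (e.g. session affinity, EWMA balancing) no longer apply.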

Inspect and access ingress-nginx

$ kubectl -n ingress-nginx get svc
NAME                                 TYPE           CLUSTER-IP       EXTERNAL-IP    PORT(S)                      AGE
ingress-nginx-controller             LoadBalancer   10.105.71.57     172.20.0.200   80:30668/TCP,443:31198/TCP   99s
ingress-nginx-controller-admission   ClusterIP      10.105.141.141   <none>         443/TCP                      29m
$ kubectl -n ingress-nginx get ingressclasses.networking.k8s.io
NAME    CONTROLLER             PARAMETERS   AGE
nginx   k8s.io/ingress-nginx   <none>       45m

Visit https://172.20.0.200 to reach ingress-nginx; no backend is configured by default.

Logging

The nginx-ingress-controller reads its global settings from a ConfigMap (passed to the controller via the --configmap flag), for example:

apiVersion: v1
data:
  log-format: '{remote_address: $remote_addr, remote_user: "$remote_user", time_date: [$time_local], request: "$request", status: $status, http_referer: "$http_referer", http_user_agent: "$http_user_agent", request_id: $request_id}'
  log-format-escape-json: "true"
  enable-syslog: "true"
  syslog-host: <syslog-ip>
  syslog-port: "<syslog-port>"
kind: ConfigMap
metadata:
  name: nginx-ingress-controller-cm
  namespace: kube-system

nginx-ingress logging notes:

  • Sending logs to syslog is recommended
    • access_log and error_log can be configured via the ConfigMap as shown above
    • for per-server logs, a configuration like the following is recommended:
access_log syslog:server=[2001:db8::1]:1234,facility=local7,tag=nginx,severity=info;
  • /var/log/nginx/access.log is redirected to /dev/stdout by default
  • /var/log/nginx/error.log is redirected to /dev/stderr by default

Example

  • Backend Service and Deployment: deployment-hello-app.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: hello-app
  namespace: default
spec:
  selector:
    app: hello-app
    release: canary
  type: ClusterIP
  ports:
  - name: port-80
    port: 80
    targetPort: 8080
    protocol: TCP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app-dp
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-app
      release: canary
  template:
    metadata:
      name: hello-app-pod
      labels:
        app: hello-app
        release: canary
    spec:
      containers:
      - name: hello-app
        image: gcriogooglesamples/hello-app:1.0
        ports:
        - name: http
          containerPort: 8080

Prepare a self-signed SSL certificate

openssl genrsa -out tls.key 2048
openssl req -new -x509 -key tls.key -out tls.crt -subj /C=CN/ST=SH/L=SH/O=IT/CN=hello-app.xiexianbin.cn/
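Before loading the certificate into a Secret, it is worth confirming it was generated correctly. A sketch using throwaway paths under /tmp (the file names are illustrative):

```shell
# Generate a throwaway key/cert with the same flags as above, then inspect it.
openssl genrsa -out /tmp/tls.key 2048
openssl req -new -x509 -key /tmp/tls.key -out /tmp/tls.crt \
  -subj "/C=CN/ST=SH/L=SH/O=IT/CN=hello-app.xiexianbin.cn/"
# Print the subject; it should contain CN = hello-app.xiexianbin.cn
openssl x509 -in /tmp/tls.crt -noout -subject
```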

Create the Secret

$ kubectl create secret tls hello-app-secret --cert=tls.crt --key=tls.key
secret/hello-app-secret created
$ kubectl get secret hello-app-secret
NAME               TYPE                DATA   AGE
hello-app-secret   kubernetes.io/tls   2      12s
$ kubectl get secret hello-app-secret -o yaml
apiVersion: v1
data:
  tls.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURqVENDQW5XZ0F3SUJBZ0lVVmd0SHA0aVhwVFZGbnMwTGlBNkNHcFQzZEhVd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1ZqRUxNQWtHQTFVRUJoTUNRMDR4Q3pBSkJnTlZCQWdNQWxOSU1Rc3dDUVlEVlFRSERBSlRTREVMTUFrRwpBMVVFQ2d3Q1NWUXhJREFlQmdOVkJBTU1GMmhsYkd4dkxXRndjQzU0YVdWNGFXRnVZbWx1TG1OdU1CNFhEVEl5Ck1ETXlNVEEzTkRNd04xb1hEVEl5TURReU1EQTNORE13TjFvd1ZqRUxNQWtHQTFVRUJoTUNRMDR4Q3pBSkJnTlYKQkFnTUFsTklNUXN3Q1FZRFZRUUhEQUpUU0RFTE1Ba0dBMVVFQ2d3Q1NWUXhJREFlQmdOVkJBTU1GMmhsYkd4dgpMV0Z3Y0M1NGFXVjRhV0Z1WW1sdUxtTnVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDCkFRRUF1MkczQ0VJTWpraHc1a2d4UU4rOVBudDA0eFI1OFR1RTJ0WTRSYUM4OE1vRGszMGEvU0dOSG9PVDJDS1cKZFlkN01kSE42Z3ZqbVE1R2lVWmpFZWR3L3I1Tkg1Q1hwVUpUbzlhMTBVQXV5U3FtTmVIcTQyMmhPZndHRE5yQQpqVGQvajNsSEdaNjlsVnVUUWhhZ25RM1FUNzRYTitzMVdlVDZxNlJFb0pWTFFrQ2JENitRVHNWNnVLNWxvVXJOClNrVDhkbmdOVDMzTHJFaDlGRXFJclNIc28yRTVNc0lrYTZKTTh4ZGJNUEdPLzJ1aTBVQjJOMHJ2SFp1dENYd0MKSmF1clBwNlVnd0RpWG5XZEQxU0ZYbUovdmhvd2wySldaYW5UZGd0RVJzeDZUR2EzN2lHWUszNTR5R1l6cDg2awpKR2Ivc1ZURTRnS0doK1lxeVdveEFDcXdqUUlEQVFBQm8xTXdVVEFkQmdOVkhRNEVGZ1FVb3ppaUxOYWhWeVZ1ClRYWWRNRExKWGZHRzdYTXdId1lEVlIwakJCZ3dGb0FVb3ppaUxOYWhWeVZ1VFhZZE1ETEpYZkdHN1hNd0R3WUQKVlIwVEFRSC9CQVV3QXdFQi96QU5CZ2txaGtpRzl3MEJBUXNGQUFPQ0FRRUFtWmVSRjlxTm1mYTEvRTBCaExrYgo2YlU4YmxvUkxjSmZZVytHWnR0dnllckFvVmhGUUEySktlN2pBTEMvYy9HaXlnMk1rekk2U0RaNGxQL0ZmTVJ3CitiZnRYeHZWRTI1ZjdtcDdQQVpnK1ljWDh4dVcxeEdtUjY3Q245ZG9DdFFqYllESEpwcWJ6bFR6MHpCZ2hZMTUKV2oyejBHS2xWRkZJWkVnTGMrQWMybGl0ajhuQlJMeTFjUm5IUVpjUHFMVElXbWV3QkUrSmpERHI3dVdidTdsMgpFdlJJYjhlMVQ2Q0p3c2Z1cjYrV1RQL3VKVGM4ekg3RnhyUEtqVVFPdjJ5UGQ3cVJOSkVpSHNLWlBteVEvT2NKCmZsUWxIMXpSL09idDZkV1h0Um1oMlcxVkJia29Gd2RabmlmQ29Nb0Rhb0puTmo4NHlNWm9FTXNUTU03ZVk3ak8KSHc9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
  tls.key: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBdTJHM0NFSU1qa2h3NWtneFFOKzlQbnQwNHhSNThUdUUydFk0UmFDODhNb0RrMzBhCi9TR05Ib09UMkNLV2RZZDdNZEhONmd2am1RNUdpVVpqRWVkdy9yNU5INUNYcFVKVG85YTEwVUF1eVNxbU5lSHEKNDIyaE9md0dETnJBalRkL2ozbEhHWjY5bFZ1VFFoYWduUTNRVDc0WE4rczFXZVQ2cTZSRW9KVkxRa0NiRDYrUQpUc1Y2dUs1bG9Vck5Ta1Q4ZG5nTlQzM0xyRWg5RkVxSXJTSHNvMkU1TXNJa2E2Sk04eGRiTVBHTy8ydWkwVUIyCk4wcnZIWnV0Q1h3Q0phdXJQcDZVZ3dEaVhuV2REMVNGWG1KL3Zob3dsMkpXWmFuVGRndEVSc3g2VEdhMzdpR1kKSzM1NHlHWXpwODZrSkdiL3NWVEU0Z0tHaCtZcXlXb3hBQ3F3alFJREFRQUJBb0lCQUZWTnI3QnZ1UjJROXV5dQowdFZReGV0SzhyUnAzdldsL2Q1T0JZSVZJYzZRZUw1TkJ0dFR5ZFdwV3NYYlFSbXlHckJjYmR4dG15aFRhbU1XCkN3WGNrZ0UyaXcraW1KYWdNa2wwOW9LVE1IbGVGQnFWaFlRUnBZTXJLMm53c0JYWnZSV0l4WWh4VHFkTzhDUDIKL3hsZ2I0UE42dTRIQXR1d3RUa3NhQjJldVlzbjdHZWcwY1lzVFhjMWwwV3BWcTlDRnhwUDhCcmJFc2t2WFR0NgpRSTA5NXltMVJ3YjBuTkkwVUYzQ2lMY1dSTVdoVTN1d253aFhtL3dLWkRPZDB0V2pvUFBCRHUzTTdJVjNmWjdKCjMreWJqSmYxWkczaXg0Z0s1MnI3aWY4WW5NVDdqYVZERnJQQ1YvS1Bvdk9mY1NwMUI2aDJNTHBCMjNtaXBvbGwKSGN0cENvRUNnWUVBNktyTEdMNWsxZFdMRklJNk5jaUFBaHc3YTdBUGwzVmlGN1FBSjVBbXgrQzROVG42dUFpcwp0Nm5DckJoNGRTUTdPU0FqeHhUWTNiWGpRcnBaK0lBMkd4RTRla2J4ZVdpeEhuTGR6K2RUYlBPSkhxWjdRZkYyCndtdWRUaGJDT0JCNGQrdXB5ZFBNZ2dYc2xsbjdhMUZtY0Z5Mi9mVUNFQUFES3BDQWNOVUZsMjBDZ1lFQXppeFIKc1pYbkdHL1M0ekd0NTNzYXMrL3VycWlLbmY0eTFobVEwUThnendLTUNwWGJjWlJRTnQvUkljQjRwL0VuaC9KWApIckMydWYrVFJ0UWRIZjdRRjF3NUFZWWV0d09EY2VKRlIyWUZ3Tk5qT0pPbDBCWDI0eXVUbERGT0JUSWFhczFrCklaVmpoZlBLZVJNbDU5NHA3UFQvQ1NOTFd5di9MWWJ3Vkk0K0thRUNnWUFCUnk3bWErVlI1MkprTW5MdmFMS0wKVUd4akl3eHk0SW94WnlPNUUrbWluM0ZqbVhYdkhOMFdCVEMwa1UzWUZ1TGNaWGpNMXloNXowMzRSOTNHcDYyawphR3ZQQUNURGJmZkxHd1pzNWZCbllNOFlCQUlaVXFJOFh5cjJDdG4yUk9Ea2g3N2ZCUExTcEFXd3JiM2IwUTZtCi8xdGgrYjZSSis3Y2hQNnZuL3Z2NFFLQmdRQzBKQWtsTHlNQ09RSjhQRFlFb1kxTlZ3Q25YdC91OStJWEs3TmEKMXVzRnRPWURnYmlCWHVOUGJ2UGRsN3hVa09MSFo3a3pPWmdPbi81Z3pvaTZZcUFUS1NNdDc2LzZuSGxIRWpzUwpEVlJOak9XTzA0TDNjNW1LRjlNVWtwZm05a1lhdDJjYjZObFNleGFYLzJFSlhSWW8way9iL2hpamlxZWxjZGVmCktjR3F3UUtCZ1FESTM5QmNXZGlwV001aHNHcWJFW
lk2eDk5RE1hY1psTjBwbHY5NGVWNHhKMERSM2tEMFNpTDcKam9DNXNvcndmbDY0WlNMQ2FrVlMwejZGWk1KeWhsMFc3NGpFL21xQlJ2Q1JqYU9ENzhYa1BUUzM2YVBIbCt6eApDOVBPUm9IZ2VSc0N3bXJYbTNDbllKUXIzclk0OGpWT3FuQUplOUVKdDMram5sQWtsdDVKUEE9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
kind: Secret
metadata:
  creationTimestamp: "2022-03-20T07:43:47Z"
  name: hello-app-secret
  namespace: default
  resourceVersion: "978625"
  uid: a11e6968-12ab-4a4e-9278-72ae8ebb3916
type: kubernetes.io/tls

Ingress definition

  • ingress-hello-app.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-hello-app
  namespace: default
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - hello-app.xiexianbin.cn
    secretName: hello-app-secret
  rules:
  - host: hello-app.xiexianbin.cn
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: hello-app
            port:
              number: 8080

Notes:

  • ingressClassName: nginx binds this Ingress to the nginx IngressClass created earlier.
  • pathType
    • ImplementationSpecific: matching is delegated to the IngressClass
    • Exact: matches the URL path exactly, case-sensitively
    • Prefix: matches on URL path prefixes split by /; matching is case-sensitive and done element by element on the path segments
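The Prefix pathType matches per path element rather than per character, which is the subtle part. A minimal sketch of that semantics (my own illustration, not the Kubernetes source):

```python
def prefix_match(pattern: str, path: str) -> bool:
    """Element-wise Prefix match on '/'-separated path segments."""
    p = [s for s in pattern.split("/") if s]
    q = [s for s in path.split("/") if s]
    return q[:len(p)] == p

# /foo matches /foo, /foo/ and /foo/bar, but NOT /foobar,
# because matching is per path element, not per character.
print(prefix_match("/foo", "/foo/bar"))  # True
print(prefix_match("/foo", "/foobar"))   # False
```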

Create the Ingress

$ kubectl apply -f deployment-hello-app.yaml
service/hello-app created
$ kubectl apply -f ingress-hello-app.yaml
ingress.networking.k8s.io/ingress-hello-app created

Query related information

$ kubectl get ingress
NAME                CLASS   HOSTS                     ADDRESS   PORTS     AGE
ingress-hello-app   nginx   hello-app.xiexianbin.cn             80, 443   34s
$ kubectl describe ingress ingress-hello-app
Name:             ingress-hello-app
Labels:           <none>
Namespace:        default
Address:          172.20.0.200
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
TLS:
  hello-app-secret terminates hello-app.xiexianbin.cn
Rules:
  Host                     Path  Backends
  ----                     ----  --------
  hello-app.xiexianbin.cn
                           /   hello-app:8080 ()
Annotations:               <none>
Events:
  Type    Reason  Age                From                      Message
  ----    ------  ----               ----                      -------
  Normal  Sync    40h (x2 over 40h)  nginx-ingress-controller  Scheduled for sync

Inspect the nginx configuration inside the Pod

$ kubectl -n ingress-nginx exec -it ingress-nginx-controller-65f644ffd4-zt9n2 -- sh
/etc/nginx $ cat nginx.conf

# Configuration checksum: 18429259104850088794

# setup custom paths that do not require root access
pid /tmp/nginx.pid;

daemon off;

worker_processes 4;

worker_rlimit_nofile 1047552;

worker_shutdown_timeout 240s ;

events {
	multi_accept        on;
	worker_connections  16384;
	use                 epoll;
}

http {
	lua_package_path "/etc/nginx/lua/?.lua;;";

	lua_shared_dict balancer_ewma 10M;
	lua_shared_dict balancer_ewma_last_touched_at 10M;
	lua_shared_dict balancer_ewma_locks 1M;
	lua_shared_dict certificate_data 20M;
	lua_shared_dict certificate_servers 5M;
	lua_shared_dict configuration_data 20M;
	lua_shared_dict global_throttle_cache 10M;
	lua_shared_dict ocsp_response_cache 5M;

	init_by_lua_block {
		collectgarbage("collect")

		-- init modules
		local ok, res

		ok, res = pcall(require, "lua_ingress")
		if not ok then
		error("require failed: " .. tostring(res))
		else
		lua_ingress = res
		lua_ingress.set_config({
			use_forwarded_headers = false,
			use_proxy_protocol = false,
			is_ssl_passthrough_enabled = false,
			http_redirect_code = 308,
			listen_ports = { ssl_proxy = "442", https = "443" },

			hsts = true,
			hsts_max_age = 15724800,
			hsts_include_subdomains = true,
			hsts_preload = false,

			global_throttle = {
				memcached = {
					host = "", port = 11211, connect_timeout = 50, max_idle_timeout = 10000, pool_size = 50,
				},
				status_code = 429,
			}
		})
		end

		ok, res = pcall(require, "configuration")
		if not ok then
		error("require failed: " .. tostring(res))
		else
		configuration = res
		configuration.prohibited_localhost_port = '10246'
		end

		ok, res = pcall(require, "balancer")
		if not ok then
		error("require failed: " .. tostring(res))
		else
		balancer = res
		end

		ok, res = pcall(require, "monitor")
		if not ok then
		error("require failed: " .. tostring(res))
		else
		monitor = res
		end

		ok, res = pcall(require, "certificate")
		if not ok then
		error("require failed: " .. tostring(res))
		else
		certificate = res
		certificate.is_ocsp_stapling_enabled = false
		end

		ok, res = pcall(require, "plugins")
		if not ok then
		error("require failed: " .. tostring(res))
		else
		plugins = res
		end
		-- load all plugins that'll be used here
		plugins.init({  })
	}

	init_worker_by_lua_block {
		lua_ingress.init_worker()
		balancer.init_worker()

		monitor.init_worker(10000)

		plugins.run()
	}

	geoip_country       /etc/nginx/geoip/GeoIP.dat;
	geoip_city          /etc/nginx/geoip/GeoLiteCity.dat;
	geoip_org           /etc/nginx/geoip/GeoIPASNum.dat;
	geoip_proxy_recursive on;

	aio                 threads;
	aio_write           on;

	tcp_nopush          on;
	tcp_nodelay         on;

	log_subrequest      on;

	reset_timedout_connection on;

	keepalive_timeout  75s;
	keepalive_requests 100;

	client_body_temp_path           /tmp/client-body;
	fastcgi_temp_path               /tmp/fastcgi-temp;
	proxy_temp_path                 /tmp/proxy-temp;
	ajp_temp_path                   /tmp/ajp-temp;

	client_header_buffer_size       1k;
	client_header_timeout           60s;
	large_client_header_buffers     4 8k;
	client_body_buffer_size         8k;
	client_body_timeout             60s;

	http2_max_field_size            4k;
	http2_max_header_size           16k;
	http2_max_requests              1000;
	http2_max_concurrent_streams    128;

	types_hash_max_size             2048;
	server_names_hash_max_size      1024;
	server_names_hash_bucket_size   64;
	map_hash_bucket_size            64;

	proxy_headers_hash_max_size     512;
	proxy_headers_hash_bucket_size  64;

	variables_hash_bucket_size      256;
	variables_hash_max_size         2048;

	underscores_in_headers          off;
	ignore_invalid_headers          on;

	limit_req_status                503;
	limit_conn_status               503;

	include /etc/nginx/mime.types;
	default_type text/html;

	# Custom headers for response

	server_tokens off;

	more_clear_headers Server;

	# disable warnings
	uninitialized_variable_warn off;

	# Additional available variables:
	# $namespace
	# $ingress_name
	# $service_name
	# $service_port
	log_format upstreaminfo '$remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" $request_length $request_time [$proxy_upstream_name] [$proxy_alternative_upstream_name] $upstream_addr $upstream_response_length $upstream_response_time $upstream_status $req_id';

	map $request_uri $loggable {

		default 1;
	}

	access_log /var/log/nginx/access.log upstreaminfo  if=$loggable;

	error_log  /var/log/nginx/error.log notice;

	resolver 10.96.0.10 valid=30s ipv6=off;

	# See https://www.nginx.com/blog/websocket-nginx
	map $http_upgrade $connection_upgrade {
		default          upgrade;

		# See http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive
		''               '';

	}

	# Reverse proxies can detect if a client provides a X-Request-ID header, and pass it on to the backend server.
	# If no such header is provided, it can provide a random value.
	map $http_x_request_id $req_id {
		default   $http_x_request_id;

		""        $request_id;

	}

	# Create a variable that contains the literal $ character.
	# This works because the geo module will not resolve variables.
	geo $literal_dollar {
		default "$";
	}

	server_name_in_redirect off;
	port_in_redirect        off;

	ssl_protocols TLSv1.2 TLSv1.3;

	ssl_early_data off;

	# turn on session caching to drastically improve performance

	ssl_session_cache shared:SSL:10m;
	ssl_session_timeout 10m;

	# allow configuring ssl session tickets
	ssl_session_tickets off;

	# slightly reduce the time-to-first-byte
	ssl_buffer_size 4k;

	# allow configuring custom ssl ciphers
	ssl_ciphers 'ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384';
	ssl_prefer_server_ciphers on;

	ssl_ecdh_curve auto;

	# PEM sha: 4e7c1b3639ed27f15e277a7d6e0cbbdae6b81c34
	ssl_certificate     /etc/ingress-controller/ssl/default-fake-certificate.pem;
	ssl_certificate_key /etc/ingress-controller/ssl/default-fake-certificate.pem;

	proxy_ssl_session_reuse on;

	upstream upstream_balancer {
		### Attention!!!
		#
		# We no longer create "upstream" section for every backend.
		# Backends are handled dynamically using Lua. If you would like to debug
		# and see what backends ingress-nginx has in its memory you can
		# install our kubectl plugin https://kubernetes.github.io/ingress-nginx/kubectl-plugin.
		# Once you have the plugin you can use "kubectl ingress-nginx backends" command to
		# inspect current backends.
		#
		###

		server 0.0.0.1; # placeholder

		balancer_by_lua_block {
			balancer.balance()
		}

		keepalive 320;

		keepalive_timeout  60s;
		keepalive_requests 10000;

	}

	# Cache for internal auth checks
	proxy_cache_path /tmp/nginx-cache-auth levels=1:2 keys_zone=auth_cache:10m max_size=128m inactive=30m use_temp_path=off;

	# Global filters

	## start server _
	server {
		server_name _ ;

		listen 80 default_server reuseport backlog=4096 ;
		listen 443 default_server reuseport backlog=4096 ssl http2 ;

		set $proxy_upstream_name "-";

		ssl_reject_handshake off;

		ssl_certificate_by_lua_block {
			certificate.call()
		}

		location / {

			set $namespace      "";
			set $ingress_name   "";
			set $service_name   "";
			set $service_port   "";
			set $location_path  "";
			set $global_rate_limit_exceeding n;

			rewrite_by_lua_block {
				lua_ingress.rewrite({
					force_ssl_redirect = false,
					ssl_redirect = false,
					force_no_ssl_redirect = false,
					preserve_trailing_slash = false,
					use_port_in_redirects = false,
					global_throttle = { namespace = "", limit = 0, window_size = 0, key = { }, ignored_cidrs = { } },
				})
				balancer.rewrite()
				plugins.run()
			}

			# be careful with `access_by_lua_block` and `satisfy any` directives as satisfy any
			# will always succeed when there's `access_by_lua_block` that does not have any lua code doing `ngx.exit(ngx.DECLINED)`
			# other authentication method such as basic auth or external auth useless - all requests will be allowed.
			#access_by_lua_block {
			#}

			header_filter_by_lua_block {
				lua_ingress.header()
				plugins.run()
			}

			body_filter_by_lua_block {
				plugins.run()
			}

			log_by_lua_block {
				balancer.log()

				monitor.call()

				plugins.run()
			}

			access_log off;

			port_in_redirect off;

			set $balancer_ewma_score -1;
			set $proxy_upstream_name "upstream-default-backend";
			set $proxy_host          $proxy_upstream_name;
			set $pass_access_scheme  $scheme;

			set $pass_server_port    $server_port;

			set $best_http_host      $http_host;
			set $pass_port           $pass_server_port;

			set $proxy_alternative_upstream_name "";

			client_max_body_size                    1m;

			proxy_set_header Host                   $best_http_host;

			# Pass the extracted client certificate to the backend

			# Allow websocket connections
			proxy_set_header                        Upgrade           $http_upgrade;

			proxy_set_header                        Connection        $connection_upgrade;

			proxy_set_header X-Request-ID           $req_id;
			proxy_set_header X-Real-IP              $remote_addr;

			proxy_set_header X-Forwarded-For        $remote_addr;

			proxy_set_header X-Forwarded-Host       $best_http_host;
			proxy_set_header X-Forwarded-Port       $pass_port;
			proxy_set_header X-Forwarded-Proto      $pass_access_scheme;
			proxy_set_header X-Forwarded-Scheme     $pass_access_scheme;

			proxy_set_header X-Scheme               $pass_access_scheme;

			# Pass the original X-Forwarded-For
			proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;

			# mitigate HTTPoxy Vulnerability
			# https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
			proxy_set_header Proxy                  "";

			# Custom headers to proxied server

			proxy_connect_timeout                   5s;
			proxy_send_timeout                      60s;
			proxy_read_timeout                      60s;

			proxy_buffering                         off;
			proxy_buffer_size                       4k;
			proxy_buffers                           4 4k;

			proxy_max_temp_file_size                1024m;

			proxy_request_buffering                 on;
			proxy_http_version                      1.1;

			proxy_cookie_domain                     off;
			proxy_cookie_path                       off;

			# In case of errors try the next upstream server before returning an error
			proxy_next_upstream                     error timeout;
			proxy_next_upstream_timeout             0;
			proxy_next_upstream_tries               3;

			proxy_pass http://upstream_balancer;

			proxy_redirect                          off;

		}

		# health checks in cloud providers require the use of port 80
		location /healthz {

			access_log off;
			return 200;
		}

		# this is required to avoid error if nginx is being monitored
		# with an external software (like sysdig)
		location /nginx_status {

			allow 127.0.0.1;

			deny all;

			access_log off;
			stub_status on;
		}

	}
	## end server _

	## start server hello-app.xiexianbin.cn
	server {
		server_name hello-app.xiexianbin.cn ;

		listen 80  ;
		listen 443  ssl http2 ;

		set $proxy_upstream_name "-";

		ssl_certificate_by_lua_block {
			certificate.call()
		}

		location / {

			set $namespace      "default";
			set $ingress_name   "ingress-hello-app";
			set $service_name   "hello-app";
			set $service_port   "8080";
			set $location_path  "/";
			set $global_rate_limit_exceeding n;

			rewrite_by_lua_block {
				lua_ingress.rewrite({
					force_ssl_redirect = false,
					ssl_redirect = true,
					force_no_ssl_redirect = false,
					preserve_trailing_slash = false,
					use_port_in_redirects = false,
					global_throttle = { namespace = "", limit = 0, window_size = 0, key = { }, ignored_cidrs = { } },
				})
				balancer.rewrite()
				plugins.run()
			}

			# be careful with `access_by_lua_block` and `satisfy any` directives as satisfy any
			# will always succeed when there's `access_by_lua_block` that does not have any lua code doing `ngx.exit(ngx.DECLINED)`
			# other authentication method such as basic auth or external auth useless - all requests will be allowed.
			#access_by_lua_block {
			#}

			header_filter_by_lua_block {
				lua_ingress.header()
				plugins.run()
			}

			body_filter_by_lua_block {
				plugins.run()
			}

			log_by_lua_block {
				balancer.log()

				monitor.call()

				plugins.run()
			}

			port_in_redirect off;

			set $balancer_ewma_score -1;
			set $proxy_upstream_name "default-hello-app-8080";
			set $proxy_host          $proxy_upstream_name;
			set $pass_access_scheme  $scheme;

			set $pass_server_port    $server_port;

			set $best_http_host      $http_host;
			set $pass_port           $pass_server_port;

			set $proxy_alternative_upstream_name "";

			client_max_body_size                    1m;

			proxy_set_header Host                   $best_http_host;

			# Pass the extracted client certificate to the backend

			# Allow websocket connections
			proxy_set_header                        Upgrade           $http_upgrade;

			proxy_set_header                        Connection        $connection_upgrade;

			proxy_set_header X-Request-ID           $req_id;
			proxy_set_header X-Real-IP              $remote_addr;

			proxy_set_header X-Forwarded-For        $remote_addr;

			proxy_set_header X-Forwarded-Host       $best_http_host;
			proxy_set_header X-Forwarded-Port       $pass_port;
			proxy_set_header X-Forwarded-Proto      $pass_access_scheme;
			proxy_set_header X-Forwarded-Scheme     $pass_access_scheme;

			proxy_set_header X-Scheme               $pass_access_scheme;

			# Pass the original X-Forwarded-For
			proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;

			# mitigate HTTPoxy Vulnerability
			# https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
			proxy_set_header Proxy                  "";

			# Custom headers to proxied server

			proxy_connect_timeout                   5s;
			proxy_send_timeout                      60s;
			proxy_read_timeout                      60s;

			proxy_buffering                         off;
			proxy_buffer_size                       4k;
			proxy_buffers                           4 4k;

			proxy_max_temp_file_size                1024m;

			proxy_request_buffering                 on;
			proxy_http_version                      1.1;

			proxy_cookie_domain                     off;
			proxy_cookie_path                       off;

			# In case of errors try the next upstream server before returning an error
			proxy_next_upstream                     error timeout;
			proxy_next_upstream_timeout             0;
			proxy_next_upstream_tries               3;

			proxy_pass http://upstream_balancer;

			proxy_redirect                          off;

		}

	}
	## end server hello-app.xiexianbin.cn

	# backend for when default-backend-service is not configured or it does not have endpoints
	server {
		listen 8181 default_server reuseport backlog=4096;

		set $proxy_upstream_name "internal";

		access_log off;

		location / {
			return 404;
		}
	}

	# default server, used for NGINX healthcheck and access to nginx stats
	server {
		listen 127.0.0.1:10246;
		set $proxy_upstream_name "internal";

		keepalive_timeout 0;
		gzip off;

		access_log off;

		location /healthz {
			return 200;
		}

		location /is-dynamic-lb-initialized {
			content_by_lua_block {
				local configuration = require("configuration")
				local backend_data = configuration.get_backends_data()
				if not backend_data then
				ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)
				return
				end

				ngx.say("OK")
				ngx.exit(ngx.HTTP_OK)
			}
		}

		location /nginx_status {
			stub_status on;
		}

		location /configuration {
			client_max_body_size                    21M;
			client_body_buffer_size                 21M;
			proxy_buffering                         off;

			content_by_lua_block {
				configuration.call()
			}
		}

		location / {
			content_by_lua_block {
				ngx.exit(ngx.HTTP_NOT_FOUND)
			}
		}
	}
}

stream {
	lua_package_path "/etc/nginx/lua/?.lua;/etc/nginx/lua/vendor/?.lua;;";

	lua_shared_dict tcp_udp_configuration_data 5M;

	init_by_lua_block {
		collectgarbage("collect")

		-- init modules
		local ok, res

		ok, res = pcall(require, "configuration")
		if not ok then
		error("require failed: " .. tostring(res))
		else
		configuration = res
		end

		ok, res = pcall(require, "tcp_udp_configuration")
		if not ok then
		error("require failed: " .. tostring(res))
		else
		tcp_udp_configuration = res
		tcp_udp_configuration.prohibited_localhost_port = '10246'

		end

		ok, res = pcall(require, "tcp_udp_balancer")
		if not ok then
		error("require failed: " .. tostring(res))
		else
		tcp_udp_balancer = res
		end
	}

	init_worker_by_lua_block {
		tcp_udp_balancer.init_worker()
	}

	lua_add_variable $proxy_upstream_name;

	log_format log_stream '[$remote_addr] [$time_local] $protocol $status $bytes_sent $bytes_received $session_time';

	access_log /var/log/nginx/access.log log_stream ;

	error_log  /var/log/nginx/error.log notice;

	upstream upstream_balancer {
		server 0.0.0.1:1234; # placeholder

		balancer_by_lua_block {
			tcp_udp_balancer.balance()
		}
	}

	server {
		listen 127.0.0.1:10247;

		access_log off;

		content_by_lua_block {
			tcp_udp_configuration.call()
		}
	}

	# TCP services

	# UDP services

	# Stream Snippets

}

Access the service

$ curl -k https://hello-app.xiexianbin.cn
Hello, world!
Version: 1.0.0
Hostname: hello-app-dp-74f5d67978-8ghmx

$ # if DNS is not configured, pin the host name to the LoadBalancer IP instead:
$ curl -k --resolve hello-app.xiexianbin.cn:443:172.20.0.200 https://hello-app.xiexianbin.cn

References

  1. https://github.com/kubernetes/ingress-nginx
  2. https://kubernetes.github.io/ingress-nginx/deploy/
  3. https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/