Knative Service Examples


Examples of creating, deleting, updating, and querying Knative Services, together with examples of accessing a Service and splitting traffic between Revisions.

How It Works

In Knative Serving, you run an application by creating a Knative Service object. Creating the Service resource triggers the creation of the following resources (the sketch after this list shows one way to inspect them):

  • a Configuration object, which creates a Revision; the Revision in turn automatically creates two objects:
    • a Deployment object
    • a PodAutoscaler object
  • a Route object, which creates:
    • a Kubernetes Service object
    • a pair of Istio VirtualService objects:
      • <kservice-name>-ingress
      • <kservice-name>-mesh
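Assuming a Service named h1 in the default namespace, a rough way to see what a single kservice produced is to filter by Knative's ownership labels. The label keys below are assumptions based on Knative's labelling conventions (the serving.knative.dev/route label is visible in the VirtualService output later in this post); verify them on your cluster with --show-labels.

# Knative-level objects created for Service "h1" (label key assumed)
kubectl get configuration,route,revision -l serving.knative.dev/service=h1
# Revision-level workloads (label key assumed)
kubectl get deployment,podautoscaler,svc -l serving.knative.dev/service=h1
# Istio routing objects created by the Route
kubectl get virtualservices.networking.istio.io -l serving.knative.dev/route=h1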

Command Overview

kn service

$ kn service --help
Manage Knative services

Usage:
  kn service [options]

Aliases:
  service, ksvc, services

Available Commands:
  apply       Apply a service declaration
  create      Create a service
  delete      Delete services
  describe    Show details of a service
  export      Export a service and its revisions
  import      Import a service and its revisions (experimental)
  list        List services
  update      Update a service


Use "kn <command> --help" for more information about a given command.
Use "kn options" for a list of global command-line options (applies to all commands).

Creating a Service

Notes:

  • The official sample image is rather large, so a Golang image was built based on it: xiexianbin/knative-helloworld-go:latest (a build sketch follows this list)
  • A Service can be created in two ways:
    • via the command line
    • via a YAML file
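A minimal sketch of how such an image can be built from the Knative helloworld-go sample; the sample path and the registry/tag below are assumptions, adjust them to your own repository:

# Build and push a small Go hello-world image
git clone https://github.com/knative/docs.git
cd docs/code-samples/serving/hello-world/helloworld-go   # path may differ between docs versions
docker build -t <your-registry>/knative-helloworld-go:latest .
docker push <your-registry>/knative-helloworld-go:latest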

Command help

  • kn service create -h
$ kn service create -h
Create a service

Usage:
  kn service create NAME --image IMAGE [options]

Examples:

  # Create a service 's0' using image knativesamples/helloworld
  kn service create s0 --image knativesamples/helloworld

  # Create a service with multiple environment variables
  kn service create s1 --env TARGET=v1 --env FROM=examples --image knativesamples/helloworld

  # Create or replace a service using --force flag
  # if service 's1' doesn't exist, it's a normal create operation
  kn service create --force s1 --image knativesamples/helloworld

  # Create or replace environment variables of service 's1' using --force flag
  kn service create --force s1 --env TARGET=force --env FROM=examples --image knativesamples/helloworld

  # Create a service with port 8080
  kn service create s2 --port 8080 --image knativesamples/helloworld

  # Create a service with port 8080 and port name h2c
  kn service create s2 --port h2c:8080 --image knativesamples/helloworld

  # Create or replace default resources of a service 's1' using --force flag
  # (earlier configured resource requests and limits will be replaced with default)
  # (earlier configured environment variables will be cleared too if any)
  kn service create --force s1 --image knativesamples/helloworld

  # Create a service with annotation
  kn service create s3 --image knativesamples/helloworld --annotation sidecar.istio.io/inject=false

  # Create a private service (that is a service with no external endpoint)
  kn service create s1 --image knativesamples/helloworld --cluster-local

  # Create a service with 250MB memory, 200m CPU requests and a GPU resource limit
  # [https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/]
  # [https://kubernetes.io/docs/tasks/manage-gpus/scheduling-gpus/]
  kn service create s4gpu --image knativesamples/hellocuda-go --request memory=250Mi,cpu=200m --limit nvidia.com/gpu=1

  # Create the service in offline mode instead of kubernetes cluster (Beta)
  kn service create gitopstest -n test-ns --image knativesamples/helloworld --target=/user/knfiles
  kn service create gitopstest --image knativesamples/helloworld --target=/user/knfiles/test.yaml
  kn service create gitopstest --image knativesamples/helloworld --target=/user/knfiles/test.json
...

Creating via the Command Line

Create the service:

root@k8s-master:~/knative# kn service create h1 --port 8080 --image xiexianbin/knative-helloworld-go:latest --env TARGET="Go Sample v1"
Creating service 'h1' in namespace 'default':

  0.082s The Route is still working to reflect the latest desired specification.
  0.119s Configuration "h1" is waiting for a Revision to become ready.
  0.149s ...
 63.095s ...
 63.307s Ingress has not yet been reconciled.
 63.894s Waiting for load balancer to be ready
 64.135s Ready to serve.

Service 'h1' created to latest revision 'h1-00001' is available at URL:
http://h1.default.example.com

Inspect the resources with kn

root@k8s-master:~/knative# kn service list
NAME   URL                             LATEST     AGE     CONDITIONS   READY   REASON
h1     http://h1.default.example.com   h1-00001   3m19s   3 OK / 3     True
root@k8s-master:~/knative# kn service describe h1
Name:       h1
Namespace:  default
Age:        35m
URL:        http://h1.default.example.com

Revisions:
  100%  @latest (h1-00001) [1] (35m)
        Image:     xiexianbin/knative-helloworld-go:latest (pinned to 649546)
        Replicas:  0/0

Conditions:
  OK TYPE                   AGE REASON
  ++ Ready                  37m
  ++ ConfigurationsReady    37m
  ++ RoutesReady            37m
root@k8s-master:~/knative# kubectl get kservice
NAME   URL                             LATESTCREATED   LATESTREADY   READY   REASON
h1     http://h1.default.example.com   h1-00001        h1-00001      True
root@k8s-master:~/knative# kubectl get configurations.serving.knative.dev
NAME   LATESTCREATED   LATESTREADY   READY   REASON
h1     h1-00001        h1-00001      True
root@k8s-master:~/knative# kn revision list
NAME       SERVICE   TRAFFIC   TAGS   GENERATION   AGE   CONDITIONS   READY   REASON
h1-00001   h1        100%             1            37m   3 OK / 4     True
root@k8s-master:~/knative# kn route list
NAME   URL                             READY
h1     http://h1.default.example.com   True

Inspect the Kubernetes resources

root@k8s-master:~/knative# kubectl get all
NAME                       TYPE           CLUSTER-IP       EXTERNAL-IP                                            PORT(S)                                              AGE
service/h1                 ExternalName   <none>           knative-local-gateway.istio-system.svc.cluster.local   80/TCP                                               2m52s
service/h1-00001           ClusterIP      10.108.172.164   <none>                                                 80/TCP,443/TCP                                       3m49s
service/h1-00001-private   ClusterIP      10.100.223.182   <none>                                                 80/TCP,443/TCP,9090/TCP,9091/TCP,8022/TCP,8012/TCP   3m49s
service/kubernetes         ClusterIP      10.96.0.1        <none>                                                 443/TCP                                              44h

NAME                                  READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/h1-00001-deployment   0/0     0            0           3m49s

NAME                                             DESIRED   CURRENT   READY   AGE
replicaset.apps/h1-00001-deployment-57578c7c6d   0         0         0       3m49s

NAME                                   LATESTCREATED   LATESTREADY   READY   REASON
configuration.serving.knative.dev/h1   h1-00001        h1-00001      True

NAME                           URL                             READY   REASON
route.serving.knative.dev/h1   http://h1.default.example.com   True

NAME                                    CONFIG NAME   K8S SERVICE NAME   GENERATION   READY   REASON   ACTUAL REPLICAS   DESIRED REPLICAS
revision.serving.knative.dev/h1-00001   h1                               1            True             0                 0

NAME                             URL                             LATESTCREATED   LATESTREADY   READY   REASON
service.serving.knative.dev/h1   http://h1.default.example.com   h1-00001        h1-00001      True

Adjust Istio

Since there is no external load balancer in this cluster, the istio-ingressgateway Service stays <pending> as type LoadBalancer; patch it to NodePort so it can be reached through a node port:

root@k8s-master:~/knative# kubectl -n istio-system get svc
NAME                    TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                                      AGE
istio-egressgateway     ClusterIP      10.100.182.135   <none>        80/TCP,443/TCP                               45h
istio-ingressgateway    LoadBalancer   10.96.242.187    <pending>     15021:30455/TCP,80:30999/TCP,443:32324/TCP   45h
istiod                  ClusterIP      10.102.235.99    <none>        15010/TCP,15012/TCP,443/TCP,15014/TCP        45h
knative-local-gateway   ClusterIP      10.110.183.92    <none>        80/TCP                                       8h
root@k8s-master:~/knative# kubectl patch svc -n istio-system istio-ingressgateway -p '{"spec": {"type": "NodePort"}}'
service/istio-ingressgateway patched
root@k8s-master:~/knative# kubectl -n istio-system get svc
NAME                    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                                      AGE
istio-egressgateway     ClusterIP   10.100.182.135   <none>        80/TCP,443/TCP                               45h
istio-ingressgateway    NodePort    10.96.242.187    <none>        15021:30455/TCP,80:30999/TCP,443:32324/TCP   45h
istiod                  ClusterIP   10.102.235.99    <none>        15010/TCP,15012/TCP,443/TCP,15014/TCP        45h
knative-local-gateway   ClusterIP   10.110.183.92    <none>        80/TCP                                       8h

Access the Application

  • Access from outside the cluster (via the NodePort)
root@k8s-master:~/knative# curl -H 'host: h1.default.example.com' --resolve h1.default.example.com:30999:127.0.0.1 http://h1.default.example.com:30999
Hello Go Sample v1!
root@k8s-master:~/knative# kubectl get pod
NAME                                   READY   STATUS    RESTARTS   AGE
h1-00001-deployment-57578c7c6d-vb2qc   2/2     Running   0          6s
  • Access from inside the cluster
root@k8s-master:~/knative# kubectl run busybox --image=busybox --restart=Never --rm -it --command -- sh
If you don't see a command prompt, try pressing enter.
/ # wget -q -O - http://h1.default
Hello Go Sample v1!
  • Access through istio-ingressgateway and knative-local-gateway (using their cluster IPs)
root@k8s-master:~/knative# kubectl -n istio-system get svc
NAME                    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                                      AGE
istio-egressgateway     ClusterIP   10.100.182.135   <none>        80/TCP,443/TCP                               45h
istio-ingressgateway    NodePort    10.96.242.187    <none>        15021:30455/TCP,80:30999/TCP,443:32324/TCP   45h
istiod                  ClusterIP   10.102.235.99    <none>        15010/TCP,15012/TCP,443/TCP,15014/TCP        45h
knative-local-gateway   ClusterIP   10.110.183.92    <none>        80/TCP                                       8h
root@k8s-master:~/knative# curl -H 'host: h1.default.example.com' 10.96.242.187
Hello Go Sample v1!
root@k8s-master:~/knative# curl -H 'host: h1.default.example.com' 10.110.183.92
Hello Go Sample v1!

At this point you can also see that a Pod has been created automatically; after a period without requests (60s by default) the Pod is released again (set --scale-min if you want to keep a minimum number of replicas).
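For example, a hedged sketch of pinning the scale bounds with kn (flag names as listed by kn service update --help):

# Keep at least one replica (disables scale-to-zero for this service) and cap it at five
kn service update h1 --scale-min 1 --scale-max 5
# Restore the default scale-to-zero behaviour
kn service update h1 --scale-min 0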

Update the Application

root@k8s-master:~/knative# kn service update h1 --port 8080 --image xiexianbin/knative-helloworld-go:latest --env TARGET="hello world"
Updating Service 'h1' in namespace 'default':

  0.041s The Configuration is still working to reflect the latest desired specification.
  8.762s Traffic is not yet migrated to the latest revision.
  8.858s Ingress has not yet been reconciled.
  8.988s Waiting for load balancer to be ready
  9.205s Ready to serve.

Service 'h1' updated to latest revision 'h1-00002' is available at URL:
http://h1.default.example.com
  • Check the Revision list: a new revision h1-00002 has been created
root@k8s-master:~/knative# kn revision list
NAME       SERVICE   TRAFFIC   TAGS   GENERATION   AGE   CONDITIONS   READY   REASON
h1-00002   h1        100%             2            43s   4 OK / 4     True
h1-00001   h1                         1            58m   3 OK / 4     True
  • Access the application again; the response now reflects the new TARGET value
root@k8s-master:~/knative# curl -H 'host: h1.default.example.com' 10.110.183.92
Hello hello world!

Traffic Splitting

  • Shift all traffic to h1-00001 (h1-00001=100)
root@k8s-master:~/knative# kn service update h1 --traffic h1-00001=100
Updating Service 'h1' in namespace 'default':

  0.069s The Route is still working to reflect the latest desired specification.
  0.131s Ingress has not yet been reconciled.
  0.338s Waiting for load balancer to be ready
  0.442s Ready to serve.

Service 'h1' with latest revision 'h1-00002' (unchanged) is available at URL:
http://h1.default.example.com
root@k8s-master:~/knative# kn revision list
NAME       SERVICE   TRAFFIC   TAGS   GENERATION   AGE     CONDITIONS   READY   REASON
h1-00002   h1                         2            5m22s   3 OK / 4     True
h1-00001   h1        100%             1            63m     3 OK / 4     True

# Traffic now goes to revision h1-00001
root@k8s-master:~/knative# curl -H 'host: h1.default.example.com' 10.110.183.92
Hello Go Sample v1!
root@k8s-master:~/knative# curl -H 'host: h1.default.example.com' 10.110.183.92
Hello Go Sample v1!
  • Split the traffic as h1-00001=20, h1-00002=80; requests are then split roughly 1:4 and Pods for both revisions are created (a tag-based variant is sketched after the output below)
root@k8s-master:~/knative# kn service update h1 --traffic h1-00001=20 --traffic h1-00002=80
Updating Service 'h1' in namespace 'default':

  0.064s The Route is still working to reflect the latest desired specification.
  0.162s Ingress has not yet been reconciled.
  0.306s Waiting for load balancer to be ready
  0.499s Ready to serve.

Service 'h1' with latest revision 'h1-00002' (unchanged) is available at URL:
http://h1.default.example.com
root@k8s-master:~/knative# kn revision list
NAME       SERVICE   TRAFFIC   TAGS   GENERATION   AGE     CONDITIONS   READY   REASON
h1-00002   h1        80%              2            7m58s   3 OK / 4     True
h1-00001   h1        20%              1            65m     3 OK / 4     True
root@k8s-master:~/knative# curl -H 'host: h1.default.example.com' 10.110.183.92
Hello hello world!
root@k8s-master:~/knative# curl -H 'host: h1.default.example.com' 10.110.183.92
Hello hello world!
root@k8s-master:~/knative# curl -H 'host: h1.default.example.com' 10.110.183.92
Hello hello world!
root@k8s-master:~/knative# curl -H 'host: h1.default.example.com' 10.110.183.92
Hello hello world!
root@k8s-master:~/knative# curl -H 'host: h1.default.example.com' 10.110.183.92
Hello Go Sample v1!
root@k8s-master:~/knative# kubectl get pod
NAME                                   READY   STATUS        RESTARTS   AGE
h1-00001-deployment-57578c7c6d-klrs8   2/2     Terminating   0          99s
h1-00002-deployment-69fbb846f8-p78pc   2/2     Terminating   0          116s
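Tags give each revision its own addressable hostname in addition to a percentage split. A sketch assuming the default tag template <tag>-<service>.<namespace>.<domain> (which this post also uses below); the tag names here are illustrative:

# Tag the revisions, then split traffic by tag
kn service update h1 --tag h1-00001=stable --tag h1-00002=candidate
kn service update h1 --traffic stable=20 --traffic candidate=80
# Each tag also gets a dedicated hostname, e.g. through knative-local-gateway:
curl -H 'host: stable-h1.default.example.com' 10.110.183.92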

How Traffic Splitting Works

Traffic is split through the configuration of the Istio VirtualService objects (a more compact check via the Route status is sketched after the output below):

root@k8s-master:~/knative# kubectl get virtualservices.networking.istio.io
NAME         GATEWAYS                                                                              HOSTS                                                                                     AGE
h1-ingress   ["knative-serving/knative-ingress-gateway","knative-serving/knative-local-gateway"]   ["h1.default","h1.default.example.com","h1.default.svc","h1.default.svc.cluster.local"]   67m
h1-mesh      ["mesh"]                                                                              ["h1.default","h1.default.svc","h1.default.svc.cluster.local"]                            67m
root@k8s-master:~/knative# kubectl describe virtualservices.networking.istio.io h1-ingress
Name:         h1-ingress
Namespace:    default
Labels:       networking.internal.knative.dev/ingress=h1
              serving.knative.dev/route=h1
              serving.knative.dev/routeNamespace=default
Annotations:  networking.internal.knative.dev/rollout: {}
              networking.knative.dev/ingress.class: istio.ingress.networking.knative.dev
              serving.knative.dev/creator: kubernetes-admin
              serving.knative.dev/lastModifier: kubernetes-admin
API Version:  networking.istio.io/v1beta1
Kind:         VirtualService
Metadata:
  Creation Timestamp:  2022-09-01T08:00:50Z
  Generation:          4
  Managed Fields:
    API Version:  networking.istio.io/v1alpha3
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:networking.internal.knative.dev/rollout:
          f:networking.knative.dev/ingress.class:
          f:serving.knative.dev/creator:
          f:serving.knative.dev/lastModifier:
        f:labels:
          .:
          f:networking.internal.knative.dev/ingress:
          f:serving.knative.dev/route:
          f:serving.knative.dev/routeNamespace:
        f:ownerReferences:
          .:
          k:{"uid":"dd10d69f-d4f9-4da8-8af6-cde2c76ef7e1"}:
      f:spec:
        .:
        f:gateways:
        f:hosts:
        f:http:
    Manager:    controller
    Operation:  Update
    Time:       2022-09-01T09:05:29Z
  Owner References:
    API Version:           networking.internal.knative.dev/v1alpha1
    Block Owner Deletion:  true
    Controller:            true
    Kind:                  Ingress
    Name:                  h1
    UID:                   dd10d69f-d4f9-4da8-8af6-cde2c76ef7e1
  Resource Version:        300603
  UID:                     f3a4872c-6b32-48ec-9d47-f6af6e5d33d5
Spec:
  Gateways:
    knative-serving/knative-ingress-gateway
    knative-serving/knative-local-gateway
  Hosts:
    h1.default
    h1.default.example.com
    h1.default.svc
    h1.default.svc.cluster.local
  Http:
    Headers:
      Request:
        Set:
          K - Network - Hash:  b82ea58fa335ea6a5a3cf2bc925acbbcb3c8c0f9b67e988f3ef3f3d1afb1c7a7
    Match:
      Authority:
        Prefix:  h1.default
      Gateways:
        knative-serving/knative-local-gateway
      Headers:
        K - Network - Hash:
          Exact:  override
    Retries:
    Route:
      Destination:
        Host:  h1-00001.default.svc.cluster.local
        Port:
          Number:  80
      Headers:
        Request:
          Set:
            Knative - Serving - Namespace:  default
            Knative - Serving - Revision:   h1-00001
      Weight:                               20
      Destination:
        Host:  h1-00002.default.svc.cluster.local
        Port:
          Number:  80
      Headers:
        Request:
          Set:
            Knative - Serving - Namespace:  default
            Knative - Serving - Revision:   h1-00002
      Weight:                               80
    Match:
      Authority:
        Prefix:  h1.default
      Gateways:
        knative-serving/knative-local-gateway
    Retries:
    Route:
      Destination:
        Host:  h1-00001.default.svc.cluster.local
        Port:
          Number:  80
      Headers:
        Request:
          Set:
            Knative - Serving - Namespace:  default
            Knative - Serving - Revision:   h1-00001
      Weight:                               20
      Destination:
        Host:  h1-00002.default.svc.cluster.local
        Port:
          Number:  80
      Headers:
        Request:
          Set:
            Knative - Serving - Namespace:  default
            Knative - Serving - Revision:   h1-00002
      Weight:                               80
    Headers:
      Request:
        Set:
          K - Network - Hash:  b82ea58fa335ea6a5a3cf2bc925acbbcb3c8c0f9b67e988f3ef3f3d1afb1c7a7
    Match:
      Authority:
        Prefix:  h1.default.example.com
      Gateways:
        knative-serving/knative-ingress-gateway
      Headers:
        K - Network - Hash:
          Exact:  override
    Retries:
    Route:
      Destination:
        Host:  h1-00001.default.svc.cluster.local
        Port:
          Number:  80
      Headers:
        Request:
          Set:
            Knative - Serving - Namespace:  default
            Knative - Serving - Revision:   h1-00001
      Weight:                               20
      Destination:
        Host:  h1-00002.default.svc.cluster.local
        Port:
          Number:  80
      Headers:
        Request:
          Set:
            Knative - Serving - Namespace:  default
            Knative - Serving - Revision:   h1-00002
      Weight:                               80
    Match:
      Authority:
        Prefix:  h1.default.example.com
      Gateways:
        knative-serving/knative-ingress-gateway
    Retries:
    Route:
      Destination:
        Host:  h1-00001.default.svc.cluster.local
        Port:
          Number:  80
      Headers:
        Request:
          Set:
            Knative - Serving - Namespace:  default
            Knative - Serving - Revision:   h1-00001
      Weight:                               20
      Destination:
        Host:  h1-00002.default.svc.cluster.local
        Port:
          Number:  80
      Headers:
        Request:
          Set:
            Knative - Serving - Namespace:  default
            Knative - Serving - Revision:   h1-00002
      Weight:                               80
Events:                                     <none>
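Rather than reading the whole VirtualService, the effective split can also be read from the Route status; the jsonpath below is a sketch, with the field names taken from the Knative Route API:

# Print revisionName and percent for each traffic target of route h1
kubectl get routes.serving.knative.dev h1 -o jsonpath='{range .status.traffic[*]}{.revisionName}{"\t"}{.percent}{"\n"}{end}'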

Creating via a YAML File

Like other Kubernetes resources, the definition of a Knative Service (short names kservice or ksvc) consists mainly of two spec fields: the Revision template (spec.template) and the traffic configuration (spec.traffic):

  • Example (applying the manifest is sketched after it)
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: helloworld-go
  namespace: default
spec:
  template:
    metadata:
      # Optional fixed Revision name; Knative requires it to be prefixed with the Service name
      name: helloworld-go-v1
      annotations:
        # https://knative.dev/docs/serving/autoscaling/autoscaler-types/#horizontal-pod-autoscaler-hpa
        # https://knative-sample.com/20-serving/10-autoscaler-kpa/
        # Autoscaler class: "kpa.autoscaling.knative.dev" or "hpa.autoscaling.knative.dev"
        autoscaling.knative.dev/class: "kpa.autoscaling.knative.dev"
        # Autoscaling metric: "concurrency", "rps" or "cpu"; "cpu" only works with the HPA class above
        # https://knative.dev/development/serving/autoscaling/rps-target/
        autoscaling.knative.dev/metric: "concurrency"
        # Interpreted according to the metric above: with "concurrency" scale out beyond 200 concurrent
        # requests, with "rps" beyond 200 requests per second, with "cpu" it is a CPU percentage.
        # autoscaling.knative.dev/target is a soft target rather than a strictly enforced bound,
        # so bursts of traffic may exceed it.
        autoscaling.knative.dev/target: "200"
        # Target burst capacity
        autoscaling.knative.dev/targetBurstCapacity: "200"
        # Target utilization: start scaling out once 80% of the hard limit is reached,
        # while the remaining 20% of traffic is still sent to this service
        autoscaling.knative.dev/targetUtilizationPercentage: "80"
        # Per-service ingress class; overrides the global setting in ConfigMap/config-network when set
        networking.knative.dev/ingress.class: <ingress-type>
        # Per-service certificate class; overrides the global setting in ConfigMap/config-network,
        # default cert-manager.certificate.networking.knative.dev
        networking.knative.dev/certificate.class: <certificate-provider>
        # Gradually roll traffic out to the new revision, starting at 1% and advancing in ~18% steps;
        # this is time based and does not interact with the autoscaling subsystem
        serving.knative.dev/rolloutDuration: "380s"
        # https://knative.dev/docs/serving/autoscaling/scale-bounds/
        # Minimum number of replicas each revision should have; default 0 (scale-to-zero) with the KPA class.
        # The camelCase and kebab-case spellings below are the same setting in two annotation forms; use only one.
        autoscaling.knative.dev/minScale: "3"
        autoscaling.knative.dev/min-scale: "0"
        # Maximum number of replicas each revision may have; 0 means unlimited
        autoscaling.knative.dev/maxScale: "3"
        autoscaling.knative.dev/max-scale: "3"
        # Initial target scale a revision must reach right after creation; the larger of the initial scale
        # and the lower bound is used as the initial target. Default: 1
        autoscaling.knative.dev/initialScale: "0"
        autoscaling.knative.dev/initial-scale: "3"
        # Scale-down delay: a time window with reduced concurrency that must pass before a scale-down decision is applied
        autoscaling.knative.dev/scaleDownDelay: "15m"
        # Stable window: during scale-down, the last replica is only removed after no traffic has reached
        # the revision for the entire duration of the stable window
        autoscaling.knative.dev/window: "40s"
        # Panic window, as a percentage of the stable window; e.g. 10.0 means the panic-mode window
        # is 10% of the stable window. Range 1.0 to 100.0
        autoscaling.knative.dev/panicWindowPercentage: "20.0"
        # Panic threshold: the traffic percentage at which the autoscaler switches from stable mode to panic mode
        autoscaling.knative.dev/panicThresholdPercentage: "150.0"
        # Minimum time the last Pod stays alive after the autoscaler decides to scale to zero
        # https://knative.dev/docs/serving/autoscaling/scale-to-zero/#scale-to-zero-last-pod-retention-period
        autoscaling.knative.dev/scale-to-zero-pod-retention-period: "1m5s"
        # queue-proxy resources https://knative.dev/docs/serving/services/configure-requests-limits-services/#configure-queue-proxy-resources
        queue.sidecar.serving.knative.dev/cpu-resource-request: "1"
        queue.sidecar.serving.knative.dev/cpu-resource-limit: "2"
        queue.sidecar.serving.knative.dev/memory-resource-request: "1Gi"
        queue.sidecar.serving.knative.dev/memory-resource-limit: "2Gi"
        queue.sidecar.serving.knative.dev/ephemeral-storage-resource-request: "400Mi"
        queue.sidecar.serving.knative.dev/ephemeral-storage-resource-limit: "450Mi"
        # Delay scale-down by 2 minutes https://knative.dev/docs/serving/autoscaling/scale-bounds/#scale-down-delay
        autoscaling.knative.dev/scale-down-delay: "2m"
        # target-burst-capacity defines how many additional concurrent requests a service can absorb
        # without request buffering; default 200.
        # target-burst-capacity: "0": no buffer at all -- the Activator steps in as soon as the current
        #   concurrency exceeds (ready Pods * target concurrency), i.e. only when scaling from zero or
        #   when the existing Pods are fully saturated.
        # target-burst-capacity: "-1": a special value meaning unlimited burst capacity, which keeps the
        #   Activator on the request path permanently. This gives smoother load balancing (the Activator
        #   can see each Pod's queue depth) at the cost of one extra network hop per request.
        # target-burst-capacity also controls how much headroom Knative keeps for panic-mode scaling:
        #   when traffic spikes faster than regular scaling can follow, Knative enters panic mode and scales out quickly.
        # https://knative.dev/docs/serving/load-balancing/target-burst-capacity/
        autoscaling.knative.dev/target-burst-capacity: "200"
        serving.knative.dev/revision-timeout-seconds: "1800"
    spec:
      # containerConcurrency is the maximum number of concurrent requests each Revision container may handle.
      # It is a hard limit; excess traffic is buffered. https://knative.dev/development/serving/autoscaling/concurrency/#hard-limit
      # https://knative.dev/docs/serving/autoscaling/concurrency/
      containerConcurrency: 50

      # Timeout issue on long requests: https://github.com/knative/serving/issues/12564
      # timeoutSeconds is the maximum duration (in seconds) a request instance is allowed to take to respond.
      # If the service does not respond within this time, the request is terminated with a 504 error.
      # A reasonable rule of thumb is about 1.2x the longest expected request-handling time, so that all
      # in-flight requests can complete before the container is terminated.
      timeoutSeconds: 600
      # responseStartTimeoutSeconds is the maximum number of seconds the routing layer waits for the container
      # to start sending network traffic, i.e. the time from request delivery until the container starts responding.
      responseStartTimeoutSeconds: 600
      # idleTimeoutSeconds is the maximum duration a request may stay open without receiving any data (bytes)
      # from the user's application.
      idleTimeoutSeconds: 300
      containers:
        - image: xiexianbin/knative-helloworld-go:latest
          ports:
            - containerPort: 8080
          env:
            - name: TARGET
              value: "Go Sample v1"
  traffic:
    - latestRevision: true
      percent: 80
    # hello-world is the name of an existing Revision
    - revisionName: hello-world
      percent: 20
      # accessed via the tag at staging-<route name>.<namespace>.<domain>
      tag: staging

# https://github.com/knative/serving/issues/12912
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: helloworld-go-servicemonitor
  labels:
    app: helloworld-go
spec:
  selector:
    matchLabels:
      function: helloworld-go
      networking.internal.knative.dev/serviceType: Private
  endpoints:
    - port: http
      scheme: http
      interval: 15s
      path: '/actuator/prometheus'
      honorLabels: true
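A minimal sketch of applying and checking the manifest. The file name is assumed, the ServiceMonitor part additionally requires the Prometheus Operator CRDs, and the annotation catalog above should first be trimmed to the options actually needed (placeholders such as <ingress-type> removed or filled in):

# Apply the manifest and inspect the resulting Service and Revisions
kubectl apply -f helloworld-go.yaml
kn service describe helloworld-go
# The "staging" tag gets its own hostname with the default tag template, e.g.
curl -H 'host: staging-helloworld-go.default.example.com' 10.110.183.92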

Notes on the containerConcurrency Parameter

Based on the containerConcurrency configuration of the revision, the load-balancing algorithm may differ (a sketch for setting the limit follows this list):

* if `containerConcurrency` is 0, requests are load-balanced randomly across the available pods.
* if `containerConcurrency` is 3 or less, load balancing uses the first available pod.
* if `containerConcurrency` is more than 3, load balancing uses round robin between the available pods.
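A hedged sketch of setting this hard limit with kn (flag name as listed by kn service update --help; 3 is only an illustrative value):

# Limit each pod of h1 to 3 concurrent requests (sets containerConcurrency on the new revision)
kn service update h1 --concurrency-limit 3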

FAQ

Warning when starting a ksvc

Warning: Kubernetes default value is insecure, Knative may default this to secure in a future release: spec.template.spec.containers[0].securityContext.allowPrivilegeEscalation, spec.template.spec.containers[0].securityContext.capabilities, spec.template.spec.containers[0].securityContext.runAsNonRoot, spec.template.spec.containers[0].securityContext.seccompProfile
  • This is a security-context warning and can be ignored.

knative route Configuration “xxx” does not have any ready Revision.

There is no traffic, and the default number of running Pods is 0; a Pod is created automatically once a request arrives.
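A quick way to tell whether the Revision is actually failing or merely scaled to zero (the service and revision names are the ones used above; the label key is an assumption based on Knative's conventions):

# A failing Revision shows a False/Unknown Ready condition; a scaled-to-zero one stays Ready
kubectl get revision -l serving.knative.dev/service=h1
kubectl describe revision h1-00001
# A request should trigger scale-from-zero and create a Pod
curl -H 'host: h1.default.example.com' 10.110.183.92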

References

  1. https://github.com/knative/serving/tree/main/sample
  2. https://github.com/knative/docs/blob/main/code-samples/serving/multi-container/service.yaml
  3. https://knative.dev/docs/samples/serving/
  4. https://knative.dev/development/samples/serving/