
Set up Konnectivity service

The Konnectivity service provides a TCP level proxy for control plane to cluster communication.

Before you begin

You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using Minikube, or you can use one of the Kubernetes playgrounds.

Configure the Konnectivity service

The following steps require an egress configuration, for example:

apiVersion: apiserver.k8s.io/v1beta1
kind: EgressSelectorConfiguration
egressSelections:
# Since we want to control the egress traffic to the cluster, we use the
# "cluster" as the name. Other supported values are "etcd", and "master".
- name: cluster
  connection:
    # This controls the protocol between the API Server and the Konnectivity
    # server. Supported values are "GRPC" and "HTTPConnect". There is no
    # end user visible difference between the two modes. You need to set the
    # Konnectivity server to work in the same mode.
    proxyProtocol: GRPC
    transport:
      # This controls what transport the API Server uses to communicate with the
      # Konnectivity server. UDS is recommended if the Konnectivity server
      # locates on the same machine as the API Server. You need to configure the
      # Konnectivity server to listen on the same UDS socket.
      # The other supported transport is "tcp". You will need to set up TLS 
      # config to secure the TCP transport.
      uds:
        udsName: /etc/srv/kubernetes/konnectivity-server/konnectivity-server.socket

You need to configure the API Server to use the Konnectivity service and direct the network traffic to the cluster nodes:

  1. Make sure that the ServiceAccountTokenVolumeProjection feature gate is enabled. You can enable service account token volume projection by providing the following flags to kube-apiserver:

    --service-account-issuer=api
    --service-account-signing-key-file=/etc/kubernetes/pki/sa.key
    --api-audiences=system:konnectivity-server
    
  2. Create an egress configuration file such as admin/konnectivity/egress-selector-configuration.yaml.

  3. Set the --egress-selector-config-file flag of the API Server to the path of your API Server egress configuration file (see the combined flag sketch after this list).

  4. If you use UDS connection, add the volume config to kube-apiserver:

    spec:
      containers:
        volumeMounts:
        - name: konnectivity-uds
          mountPath: /etc/kubernetes/konnectivity-server
          readOnly: false
      volumes:
      - name: konnectivity-uds
        hostPath:
          path: /etc/kubernetes/konnectivity-server
          type: DirectoryOrCreate
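
Putting steps 1 through 3 together, the kube-apiserver flags might look like the sketch below. This is only an illustration: the egress configuration file path here is an assumption based on the example file name from step 2, so adjust it to wherever you actually store the file on the control plane host.

--service-account-issuer=api
--service-account-signing-key-file=/etc/kubernetes/pki/sa.key
--api-audiences=system:konnectivity-server
--egress-selector-config-file=/etc/kubernetes/egress-selector-configuration.yaml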
    

Generate or obtain a certificate and kubeconfig for konnectivity-server. For example, you can use the OpenSSL command line tool to issue an X.509 certificate, using the cluster CA certificate /etc/kubernetes/pki/ca.crt on a control-plane host:

openssl req -subj "/CN=system:konnectivity-server" -new -newkey rsa:2048 -nodes -out konnectivity.csr -keyout konnectivity.key
openssl x509 -req -in konnectivity.csr -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key -CAcreateserial -out konnectivity.crt -days 375 -sha256
SERVER=$(kubectl config view -o jsonpath='{.clusters..server}')
kubectl --kubeconfig /etc/kubernetes/konnectivity-server.conf config set-credentials system:konnectivity-server --client-certificate konnectivity.crt --client-key konnectivity.key --embed-certs=true
kubectl --kubeconfig /etc/kubernetes/konnectivity-server.conf config set-cluster kubernetes --server "$SERVER" --certificate-authority /etc/kubernetes/pki/ca.crt --embed-certs=true
kubectl --kubeconfig /etc/kubernetes/konnectivity-server.conf config set-context system:konnectivity-server@kubernetes --cluster kubernetes --user system:konnectivity-server
kubectl --kubeconfig /etc/kubernetes/konnectivity-server.conf config use-context system:konnectivity-server@kubernetes
rm -f konnectivity.crt konnectivity.key konnectivity.csr
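
Optionally, you can sanity-check the generated kubeconfig. The commands below are a minimal sketch: the first simply confirms the selected context, and the second (which only succeeds once the system:auth-delegator binding shown at the end of this page has been applied) checks that the konnectivity-server identity is allowed to delegate authentication via TokenReviews.

kubectl --kubeconfig /etc/kubernetes/konnectivity-server.conf config current-context
kubectl --kubeconfig /etc/kubernetes/konnectivity-server.conf auth can-i create tokenreviews.authentication.k8s.io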

Next, you need to deploy the Konnectivity server and agents. kubernetes-sigs/apiserver-network-proxy is a reference implementation.

Deploy the Konnectivity server on your control plane node. The provided konnectivity-server.yaml manifest assumes that the Kubernetes components are deployed as static Pods in your cluster. If that is not the case, you can deploy the Konnectivity server as a DaemonSet.

apiVersion: v1
kind: Pod
metadata:
  name: konnectivity-server
  namespace: kube-system
spec:
  priorityClassName: system-cluster-critical
  hostNetwork: true
  containers:
  - name: konnectivity-server-container
    image: us.gcr.io/k8s-artifacts-prod/kas-network-proxy/proxy-server:v0.0.8
    command: ["/proxy-server"]
    args: [
            "--log-file=/var/log/konnectivity-server.log",
            "--logtostderr=false",
            "--log-file-max-size=0",
            # This needs to be consistent with the value set in egressSelectorConfiguration.
            "--uds-name=/etc/srv/kubernetes/konnectivity-server/konnectivity-server.socket",
            # The following two lines assume the Konnectivity server is
            # deployed on the same machine as the apiserver, and the certs and
            # key of the API Server are at the specified location.
            "--cluster-cert=/etc/srv/kubernetes/pki/apiserver.crt",
            "--cluster-key=/etc/srv/kubernetes/pki/apiserver.key",
            # This needs to be consistent with the value set in egressSelectorConfiguration.
            "--mode=grpc",
            "--server-port=0",
            "--agent-port=8132",
            "--admin-port=8133",
            "--agent-namespace=kube-system",
            "--agent-service-account=konnectivity-agent",
            "--kubeconfig=/etc/srv/kubernetes/konnectivity-server/kubeconfig",
            "--authentication-audience=system:konnectivity-server"
            ]
    livenessProbe:
      httpGet:
        scheme: HTTP
        host: 127.0.0.1
        port: 8133
        path: /healthz
      initialDelaySeconds: 30
      timeoutSeconds: 60
    ports:
    - name: agentport
      containerPort: 8132
      hostPort: 8132
    - name: adminport
      containerPort: 8133
      hostPort: 8133
    volumeMounts:
    - name: varlogkonnectivityserver
      mountPath: /var/log/konnectivity-server.log
      readOnly: false
    - name: pki
      mountPath: /etc/srv/kubernetes/pki
      readOnly: true
    - name: konnectivity-uds
      mountPath: /etc/srv/kubernetes/konnectivity-server
      readOnly: false
  volumes:
  - name: varlogkonnectivityserver
    hostPath:
      path: /var/log/konnectivity-server.log
      type: FileOrCreate
  - name: pki
    hostPath:
      path: /etc/srv/kubernetes/pki
  - name: konnectivity-uds
    hostPath:
      path: /etc/srv/kubernetes/konnectivity-server
      type: DirectoryOrCreate
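
Once the static Pod is running on the control plane node, a minimal health check, assuming the manifest above with the admin port on 8133, is to query the same endpoint that the liveness probe uses:

curl http://127.0.0.1:8133/healthz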

Then deploy the Konnectivity agents in your cluster:

apiVersion: apps/v1
# Alternatively, you can deploy the agents as Deployments. It is not necessary
# to have an agent on each node.
kind: DaemonSet
metadata:
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
    k8s-app: konnectivity-agent
  namespace: kube-system
  name: konnectivity-agent
spec:
  selector:
    matchLabels:
      k8s-app: konnectivity-agent
  template:
    metadata:
      labels:
        k8s-app: konnectivity-agent
    spec:
      priorityClassName: system-cluster-critical
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      containers:
        - image: us.gcr.io/k8s-artifacts-prod/kas-network-proxy/proxy-agent:v0.0.8
          name: konnectivity-agent
          command: ["/proxy-agent"]
          args: [
                  "--logtostderr=true",
                  "--ca-cert=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt",
                  # Since the konnectivity server runs with hostNetwork=true,
                  # this is the IP address of the master machine.
                  "--proxy-server-host=35.225.206.7",
                  "--proxy-server-port=8132",
                  "--service-account-token-path=/var/run/secrets/tokens/konnectivity-agent-token"
                  ]
          volumeMounts:
            - mountPath: /var/run/secrets/tokens
              name: konnectivity-agent-token
          livenessProbe:
            httpGet:
              port: 8093
              path: /healthz
            initialDelaySeconds: 15
            timeoutSeconds: 15
      serviceAccountName: konnectivity-agent
      volumes:
        - name: konnectivity-agent-token
          projected:
            sources:
              - serviceAccountToken:
                  path: konnectivity-agent-token
                  audience: system:konnectivity-server
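
After applying the DaemonSet, you can confirm that an agent Pod is running on every node; the label selector below matches the labels in the manifest above:

kubectl get pods -n kube-system -l k8s-app=konnectivity-agent -o wide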

Last, if RBAC is enabled in your cluster, create the relevant RBAC rules:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:konnectivity-server
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: system:konnectivity-server
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: konnectivity-agent
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
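
With the server, the agents, and the RBAC rules in place, API-server-to-node traffic such as kubectl logs, kubectl exec, and kubectl port-forward is carried over the Konnectivity tunnel. A minimal end-to-end check, assuming the manifests above have been applied, is therefore any successful kubectl logs call, for example against the agent Pods themselves:

kubectl -n kube-system logs -l k8s-app=konnectivity-agent --tail=20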