2022-04-26

Setting Up a k8s Cluster on Local Virtual Machines and Installing KubeSphere

# Prerequisites

  1. One or more machines running CentOS 7.x x86_64
  2. Hardware: at least 2 GB of RAM, 2 CPUs, and 30 GB of disk
  3. Full network connectivity between all machines in the cluster
  4. Outbound internet access, needed for pulling images
  5. Swap disabled

# Preparing Three Virtual Machines with VirtualBox + Vagrant

# Download and Install

Download VirtualBox

Download Vagrant
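
Once both are installed, a quick check from a terminal confirms the tools are on the PATH (the version numbers shown in the comments are only illustrative):

vagrant --version      # e.g. Vagrant 2.2.x
VBoxManage --version   # e.g. 6.1.x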

# Environment Preparation

  1. Vagrant can create the three virtual machines quickly. Before starting them, configure VirtualBox's host-only network; set it to 192.168.56.1 so that every VM gets a 56.x IP address.

    Host network settings

  2. Set the VM storage directory so the system disk does not fill up; a solid-state drive is recommended.

    Set the storage location

# Start the Three Virtual Machines

Create a Vagrantfile in a directory of your choice with the following content, then bring the machines up as shown after the file:

Vagrant.configure("2") do |config|
   (1..3).each do |i|
        config.vm.define "k8s-node#{i}" do |node|
            # Box used for the VM
            node.vm.box = "centos/7"

            # Hostname of the VM
            node.vm.hostname="k8s-node#{i}"

            # IP address of the VM
            node.vm.network "private_network", ip: "192.168.56.#{99+i}", netmask: "255.255.255.0"

            # Shared directory between the host and the VM
            # node.vm.synced_folder "~/Documents/vagrant/share", "/home/vagrant/share"

            # VirtualBox-specific settings
            node.vm.provider "virtualbox" do |v|
                # Name of the VM
                v.name = "k8s-node#{i}"
                # Memory size of the VM
                v.memory = 4096
                # Number of CPUs of the VM
                v.cpus = 4
            end
        end
   end
end
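
With the Vagrantfile in place, the machines are created and booted from the same directory (a minimal sketch; the first run downloads the centos/7 box, which can take a while):

cd /path/to/your/vagrant/dir   # the directory containing the Vagrantfile (illustrative path)
vagrant up                     # creates and boots k8s-node1, k8s-node2, k8s-node3
vagrant status                 # all three machines should show "running"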

# Set Up the NAT Network

  1. Add a new NAT network

  2. Configure the network adapter

    Go to Settings -> Network -> Adapter 1, set the attachment to NAT Network, and regenerate a new MAC address. Do this for all three VMs.

# Enable Password Login for root

  1. The Vagrant script assigns the VM IPs as 192.168.56.#{99+i}, so the three machines are:

    192.168.56.100
    192.168.56.101
    192.168.56.102
    
  2. Log in with vagrant ssh [machine name], e.g. vagrant ssh k8s-node1

  3. Switch to root with su root; the password is vagrant

  4. Edit sshd_config (a non-interactive one-liner follows this list)

    vi /etc/ssh/sshd_config

    Change PasswordAuthentication yes/no to yes

    Restart the service: service sshd restart
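
The edit can also be done non-interactively on each VM as root (a sketch; it simply rewrites the PasswordAuthentication line):

sed -i 's/^#\?PasswordAuthentication .*/PasswordAuthentication yes/' /etc/ssh/sshd_config
service sshd restart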

# Configure the Linux Environment

  1. Turn off the firewall

    systemctl stop firewalld
    systemctl disable firewalld
    
  2. Disable SELinux:

    sed -i 's/enforcing/disabled/' /etc/selinux/config
    setenforce 0
    
  3. Disable swap

    swapoff -a								# temporarily
    
    sed -ri 's/.*swap.*/#&/' /etc/fstab		# permanently
    
    free -g 								# verify: swap must be 0
    
  4. Map hostnames to IP addresses

    # check each node's address with ip addr first
    vi /etc/hosts
    
    10.0.2.15 k8s-node1
    10.0.2.24 k8s-node2
    10.0.2.25 k8s-node3
    
  5. Pass bridged IPv4 traffic to the iptables chains

    cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
    br_netfilter
    EOF
    
    cat > /etc/sysctl.d/k8s.conf << EOF
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    EOF
    
    sysctl --system 	# apply the rules
    
  6. If you get a "read-only file system" error, run:

    mount -o remount rw /
    
  7. Sync the time (optional)

    yum install -y ntpdate
    ntpdate time.windows.com	# sync to the latest time
    
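
After running the steps above on every node, a quick spot check (expected results in the comments) confirms the environment is ready:

systemctl is-active firewalld                # should print "inactive"
getenforce                                   # should print "Permissive" or "Disabled"
free -g                                      # the Swap line should be all zeros
sysctl net.bridge.bridge-nf-call-iptables    # should print "... = 1"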

# Install docker, kubeadm, kubelet, and kubectl on All Nodes

Kubernetes uses Docker as its default CRI (container runtime), so install Docker first.

# Install Docker

  1. Remove any old Docker packages

    yum remove docker \
               docker-client \
               docker-client-latest \
               docker-common \
               docker-latest \
               docker-latest-logrotate \
               docker-logrotate \
               docker-engine
    
  2. Configure the yum repository

    yum install -y yum-utils
    
    yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
    
  3. Install Docker

    # Note: match the Docker version to your k8s version
    yum install -y docker-ce-20.10.7 docker-ce-cli-20.10.7  containerd.io-1.4.6
    
    # Otherwise, you can simply install the latest version
    yum install -y docker-ce docker-ce-cli containerd.io
    
  4. Configure the registry mirror

    sudo mkdir -p /etc/docker
    sudo tee /etc/docker/daemon.json <<-'EOF'
    {
      "registry-mirrors": ["https://gm73s60x.mirror.aliyuncs.com"],
      "exec-opts": ["native.cgroupdriver=systemd"],
      "log-driver": "json-file",
      "log-opts": {
        "max-size": "100m"
      },
      "storage-driver": "overlay2"
    }
    EOF
    sudo systemctl daemon-reload
    sudo systemctl restart docker
    

    This also adds Docker's key production setting, the systemd cgroup driver (you can verify it after Docker starts; see the check after this list).

    Apply for your own Aliyun registry mirror accelerator:

  5. Start Docker on boot

    systemctl enable docker --now
    
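
With Docker running, a quick check confirms the systemd cgroup driver from daemon.json took effect:

docker info --format '{{.CgroupDriver}}'    # should print "systemd"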

# Add the Kubernetes yum Repository

cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
   http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF

# Install kubeadm, kubelet, and kubectl

  1. Install

    sudo yum install -y kubelet-1.20.9 kubeadm-1.20.9 kubectl-1.20.9 --disableexcludes=kubernetes
    
  2. Start on boot

    sudo systemctl enable --now kubelet
    
  3. Other useful commands

    # Check the kubelet status
    systemctl status kubelet
    
    # Check the kubelet version
    kubelet --version
    

# Bootstrap the Cluster with kubeadm

Run the following on the master node; k8s-node1 is used as the example.

  1. On the master node, create and run images.sh to pull the required images

    sudo tee ./images.sh <<-'EOF'
    #!/bin/bash
    images=(
    kube-apiserver:v1.20.9
    kube-proxy:v1.20.9
    kube-controller-manager:v1.20.9
    kube-scheduler:v1.20.9
    coredns:1.7.0
    etcd:3.4.13-0
    pause:3.2
    )
    for imageName in ${images[@]} ; do
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
    done
    EOF
       
    chmod +x ./images.sh && ./images.sh
    
  2. Initialize the master node

    # apiserver-advertise-address=[the master node's IP address]
    # control-plane-endpoint=[the master node's hostname]
    # pod-network-cidr can stay at the default; the range must not include the machines' own IP addresses
    
    kubeadm init \
    --apiserver-advertise-address=10.0.2.15 \
    --control-plane-endpoint=k8s-node1 \
    --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \
    --kubernetes-version v1.20.9 \
    --service-cidr=10.96.0.0/16 \
    --pod-network-cidr=192.168.0.0/16
    
  3. The output looks like this:

    Your Kubernetes control-plane has initialized successfully!
    
    To start using your cluster, you need to run the following as a regular user:
    
      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    Alternatively, if you are the root user, you can run:
    
      export KUBECONFIG=/etc/kubernetes/admin.conf
    
    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
      https://kubernetes.io/docs/concepts/cluster-administration/addons/
    
    You can now join any number of control-plane nodes by copying certificate authorities
    and service account keys on each node and then running the following as root:
    
      kubeadm join k8s-node1:6443 --token 6cz9zt.sn3xn2sxkfttkqjd \
        --discovery-token-ca-cert-hash sha256:a976415ac1148055f7d83c19758498fa0d68dd0f2c26e79714a39d6783f9a483 \
        --control-plane 
    
    Then you can join any number of worker nodes by running the following on each as root:
    
    kubeadm join k8s-node1:6443 --token 6cz9zt.sn3xn2sxkfttkqjd \
        --discovery-token-ca-cert-hash sha256:a976415ac1148055f7d83c19758498fa0d68dd0f2c26e79714a39d6783f9a483
    
  4. Set up .kube/config

    Copy the commands from the output above:

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
  5. Install the network add-on

    Calico is used here as the example:

    curl https://docs.projectcalico.org/manifests/calico.yaml -O
    
    kubectl apply -f calico.yaml
    

    Note

    This matches the --pod-network-cidr=192.168.0.0/16 passed to kubeadm init. If you changed the pod-network-cidr value, you also need to edit calico.yaml and uncomment

    # - name: CALICO_IPV4POOL_CIDR
    #   value: "192.168.0.0/16"
    

    setting it to the value you chose (a sketch follows this list).

  6. Join the worker nodes

    On the non-master machines, run the join command from the kubeadm init output, i.e.:

    kubeadm join k8s-node1:6443 --token 6cz9zt.sn3xn2sxkfttkqjd \
        --discovery-token-ca-cert-hash sha256:a976415ac1148055f7d83c19758498fa0d68dd0f2c26e79714a39d6783f9a483
    

    If the token has expired, create a new one with kubeadm token create --print-join-command.
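
If you did change --pod-network-cidr, the Calico CIDR can be patched before applying the manifest. A sketch, assuming the standard calico.yaml layout and an illustrative CIDR of 10.244.0.0/16:

# Uncomment CALICO_IPV4POOL_CIDR and set it to the CIDR you passed to kubeadm init
sed -i 's|# - name: CALICO_IPV4POOL_CIDR|- name: CALICO_IPV4POOL_CIDR|' calico.yaml
sed -i 's|#   value: "192.168.0.0/16"|  value: "10.244.0.0/16"|' calico.yaml
kubectl apply -f calico.yaml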

# Verify the Cluster

  1. Check the cluster node status

    kubectl get nodes
    
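
Besides the node list, it is worth confirming that the system pods (calico, coredns, kube-proxy, etc.) are all Running before moving on:

kubectl get pods -n kube-system

# or watch the whole cluster until everything settles
watch -n 2 kubectl get pods -A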

# Deploy the Dashboard (Optional)

  1. Deploy it

    kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.3.1/aio/deploy/recommended.yaml
    
  2. Expose the access port

    kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard
    
    # change type: ClusterIP to type: NodePort
    
  3. Find the port

    kubectl get svc -A |grep kubernetes-dashboard
    

    The output shows the assigned NodePort.

    Visit https://<any cluster IP>:<port>, e.g. https://192.168.56.100:30312/, and the dashboard login page appears.

  4. Create an access account

    # Create the access account: prepare a yaml file with vi dash.yaml
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: admin-user
      namespace: kubernetes-dashboard
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: admin-user
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cluster-admin
    subjects:
    - kind: ServiceAccount
      name: admin-user
      namespace: kubernetes-dashboard
    

    Apply it:

    kubectl apply -f dash.yaml
    
  5. Access with a token

    # Get the access token
    kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"
    
  6. Log in to the dashboard with the token generated above

# Install KubeSphere

The following example installs KubeSphere on an existing Kubernetes cluster.

# Install the KubeSphere Prerequisites

# NFS File System

  1. Install nfs-server

    Run on every machine:

    # on every machine
    yum install -y nfs-utils
    

    Run on the master node:

    # run the following commands on the master
    echo "/nfs/data/ *(insecure,rw,sync,no_root_squash)" > /etc/exports
    
    # create the shared directory, then start the nfs service
    mkdir -p /nfs/data
    
    # on the master
    systemctl enable rpcbind
    systemctl enable nfs-server
    systemctl start rpcbind
    systemctl start nfs-server
    
    # make the export take effect
    exportfs -r
    
    # check that the export is active
    exportfs
    
  2. Configure the nfs client (optional)

    # replace with your master node's IP
    showmount -e 10.0.2.15
    
    mkdir -p /nfs/data
    
    mount -t nfs 10.0.2.15:/nfs/data /nfs/data
    
  3. Configure the default storage class

    Configure a default storage class for dynamic provisioning; replace the NFS server address with your own. Create a pv.yml file with the following content:

    ## creates a StorageClass
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: nfs-storage
      annotations:
        storageclass.kubernetes.io/is-default-class: "true"
    provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
    parameters:
      archiveOnDelete: "true"  ## whether to archive the PV contents when the PV is deleted
    
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nfs-client-provisioner
      labels:
        app: nfs-client-provisioner
      # replace with namespace where provisioner is deployed
      namespace: default
    spec:
      replicas: 1
      strategy:
        type: Recreate
      selector:
        matchLabels:
          app: nfs-client-provisioner
      template:
        metadata:
          labels:
            app: nfs-client-provisioner
        spec:
          serviceAccountName: nfs-client-provisioner
          containers:
            - name: nfs-client-provisioner
              image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/nfs-subdir-external-provisioner:v4.0.2
              # resources:
              #    limits:
              #      cpu: 10m
              #    requests:
              #      cpu: 10m
              volumeMounts:
                - name: nfs-client-root
                  mountPath: /persistentvolumes
              env:
                - name: PROVISIONER_NAME
                  value: k8s-sigs.io/nfs-subdir-external-provisioner
                - name: NFS_SERVER
                  value: 10.0.2.15 ## your own NFS server address
                - name: NFS_PATH  
                  value: /nfs/data  ## the directory shared by the NFS server
          volumes:
            - name: nfs-client-root
              nfs:
                server: 10.0.2.15
                path: /nfs/data
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: nfs-client-provisioner
      # replace with namespace where provisioner is deployed
      namespace: default
    ---
    kind: ClusterRole
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: nfs-client-provisioner-runner
    rules:
      - apiGroups: [""]
        resources: ["nodes"]
        verbs: ["get", "list", "watch"]
      - apiGroups: [""]
        resources: ["persistentvolumes"]
        verbs: ["get", "list", "watch", "create", "delete"]
      - apiGroups: [""]
        resources: ["persistentvolumeclaims"]
        verbs: ["get", "list", "watch", "update"]
      - apiGroups: ["storage.k8s.io"]
        resources: ["storageclasses"]
        verbs: ["get", "list", "watch"]
      - apiGroups: [""]
        resources: ["events"]
        verbs: ["create", "update", "patch"]
    ---
    kind: ClusterRoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: run-nfs-client-provisioner
    subjects:
      - kind: ServiceAccount
        name: nfs-client-provisioner
        # replace with namespace where provisioner is deployed
        namespace: default
    roleRef:
      kind: ClusterRole
      name: nfs-client-provisioner-runner
      apiGroup: rbac.authorization.k8s.io
    ---
    kind: Role
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: leader-locking-nfs-client-provisioner
      # replace with namespace where provisioner is deployed
      namespace: default
    rules:
      - apiGroups: [""]
        resources: ["endpoints"]
        verbs: ["get", "list", "watch", "create", "update", "patch"]
    ---
    kind: RoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: leader-locking-nfs-client-provisioner
      # replace with namespace where provisioner is deployed
      namespace: default
    subjects:
      - kind: ServiceAccount
        name: nfs-client-provisioner
        # replace with namespace where provisioner is deployed
        namespace: default
    roleRef:
      kind: Role
      name: leader-locking-nfs-client-provisioner
      apiGroup: rbac.authorization.k8s.io
    

    Apply it:

    kubectl apply -f pv.yml
    

    Confirm the configuration took effect:

    kubectl get sc
    
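
To confirm that dynamic provisioning actually works, you can create a throwaway PVC against the new default class (a sketch; the claim name test-pvc is only illustrative):

kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 200Mi
EOF

kubectl get pvc test-pvc      # STATUS should turn Bound
kubectl delete pvc test-pvc   # clean up the test claim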

# metrics-server

The cluster metrics monitoring component.

Create a metrics.yml file with the following content:

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats
  - namespaces
  - configmaps
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --kubelet-insecure-tls
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/metrics-server:v0.4.3
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /livez
            port: https
            scheme: HTTPS
          periodSeconds: 10
        name: metrics-server
        ports:
        - containerPort: 4443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readyz
            port: https
            scheme: HTTPS
          periodSeconds: 10
        securityContext:
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      volumes:
      - emptyDir: {}
        name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  versionPriority: 100

Apply it:

kubectl apply -f metrics.yml
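
Once the metrics-server pod in kube-system is ready, resource metrics should be available after a minute or so:

kubectl top nodes
kubectl top pods -A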

# Install KubeSphere

Official documentation

  1. Download the core files

    wget https://github.com/kubesphere/ks-installer/releases/download/v3.1.1/kubesphere-installer.yaml
    
    wget https://github.com/kubesphere/ks-installer/releases/download/v3.1.1/cluster-configuration.yaml
    
  2. Modify cluster-configuration

    Specify the features you want to enable in cluster-configuration.yaml.

    See "Enable Pluggable Components" in the official docs:

    https://kubesphere.com.cn/docs/pluggable-components/overview/

    The changes I made are as follows:

    etcd:
        monitoring: true       # Enable or disable etcd monitoring dashboard installation. You have to create a Secret for etcd before you enable it.
        endpointIps: 10.0.2.15  # etcd cluster EndpointIps. It can be a bunch of IPs here.
        
    common:
      redis:
        enabled: true
      openldap:
        enabled: true
        
    alerting:   
      enabled: true
    
    auditing: 
      enabled: true 
     
    devops: 
      enabled: true  
      
    events:
      enabled: true
      
    logging:  
      enabled: true 
      
    network:
      networkpolicy:
        enabled: true # Enable or disable network policies.
      ippool: 
        type: calico 
    
  3. Run the installation

    kubectl apply -f kubesphere-installer.yaml
    
    kubectl apply -f cluster-configuration.yaml
    
  4. Watch the installation progress

    kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
    
  5. Access the console

    Visit port 30880 on any machine. The default credentials are:

    Account : admin
    Password: P@88w0rd
    

To fix the "etcd monitoring certificate not found" issue, create the secret manually:

kubectl -n kubesphere-monitoring-system create secret generic kube-etcd-client-certs  --from-file=etcd-client-ca.crt=/etc/kubernetes/pki/etcd/ca.crt  --from-file=etcd-client.crt=/etc/kubernetes/pki/apiserver-etcd-client.crt  --from-file=etcd-client.key=/etc/kubernetes/pki/apiserver-etcd-client.key
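
Whether or not etcd monitoring is enabled, a final check that every component came up cleanly (all pods should eventually reach Running or Completed):

kubectl get pods --all-namespaces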