
Linux Containers, Chapter 2_01: Full Walkthrough of Building a Highly Available KubeSphere Container Platform on Ubuntu 22

Linux_k8s Series

Welcome to the world of Linux. Study the notes carefully and practice often — anyone can become an expert!

Title: Full Walkthrough of Building a Highly Available KubeSphere Container Platform on Ubuntu 22

Version: 1.0.0
Author: @老王要学习
Date: 2025.06.05
Target environment: Ubuntu 22

About This Document

This document walks through building a KubeSphere container platform on Ubuntu 22. It details the environment preparation steps — hardware and software requirements, package updates, VM cloning, and hostname changes — and then covers creating the Kubernetes cluster and installing KubeSphere, including downloads, configuration, installation, and storage-volume setup, so that readers can complete the platform build end to end.

Environment Preparation

Hardware Requirements

  • Servers: 2 CPU cores, 2 GB RAM, and 20 GB of disk space per node
  • Network: make sure each server has a static IP address and that the firewall allows SSH traffic (port 22 by default)

Software Requirements

  • Operating system: Ubuntu 22
  • SSH client: SecureCRT
  • Node layout:

KubeSphere node (Ubuntu22)                                IP
master01                                                  192.168.174.10
master02                                                  192.168.174.20
master03                                                  192.168.174.30
storage (single-node storage) + harbor (private registry) 192.168.174.50

1. Environment Preparation

1.1 Update Packages

# Switch to the root user
sudo -i
# Refresh the package index
apt update
# Upgrade installed packages
apt upgrade -y
# Install the firewall and file-transfer tools
apt install ufw lrzsz -y

1.2 Clone the VMs

# Change the IP on each cloned host (the .20 node)
sudo -i
sed -i 's|\(^[[:space:]]*addresses:[[:space:]]*\)\[192.168.174.10/24\]|\1[192.168.174.20/24]|' /etc/netplan/00-installer-config.yaml
netplan apply
# Change the IP on each cloned host (the .30 node)
sudo -i
sed -i 's|\(^[[:space:]]*addresses:[[:space:]]*\)\[192.168.174.10/24\]|\1[192.168.174.30/24]|' /etc/netplan/00-installer-config.yaml
netplan apply
# Change the IP on each cloned host (the .50 node)
sudo -i
sed -i 's|\(^[[:space:]]*addresses:[[:space:]]*\)\[192.168.174.10/24\]|\1[192.168.174.50/24]|' /etc/netplan/00-installer-config.yaml
netplan apply
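Before pointing the sed command at the real `/etc/netplan/00-installer-config.yaml`, the substitution can be rehearsed on a scratch copy. This is only a sketch — the file below is a made-up netplan fragment under a hypothetical `/tmp` path, not the real config:

```shell
# Build a scratch netplan fragment that mimics the real file (hypothetical path).
cat > /tmp/netplan-demo.yaml <<'EOF'
network:
  ethernets:
    ens33:
      addresses: [192.168.174.10/24]
EOF

# The same substitution as above, retargeted at the scratch file: rewrite .10 to .20.
sed -i 's|\(^[[:space:]]*addresses:[[:space:]]*\)\[192.168.174.10/24\]|\1[192.168.174.20/24]|' /tmp/netplan-demo.yaml

# The addresses line should now carry the new IP.
grep 'addresses' /tmp/netplan-demo.yaml
```

Once the output looks right, the identical expression can be run against the real file followed by `netplan apply`.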

1.3 Set the Hostnames

# Run the matching command on each node, then start a new shell so the prompt picks up the name
hostnamectl set-hostname master-10
bash
hostnamectl set-hostname master-20
bash
hostnamectl set-hostname master-30
bash
hostnamectl set-hostname sh-50
bash
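The hostnames simply follow the last octet of each node's IP — `master-<octet>` for the three cluster nodes and `sh-50` for the storage host. A small hypothetical helper (not part of the original steps) makes the convention explicit:

```shell
# Hypothetical helper: derive the hostname this guide assigns to a given IP.
name_for_ip() {
  local octet="${1##*.}"                # last octet of the IPv4 address
  case "$octet" in
    10|20|30) echo "master-$octet" ;;   # the three control-plane/worker nodes
    50)       echo "sh-50" ;;           # storage + harbor host
    *)        echo "unknown"; return 1 ;;
  esac
}

# On a node this would drive: hostnamectl set-hostname "$(name_for_ip 192.168.174.20)"
name_for_ip 192.168.174.20
```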

1.4 Disable the Firewall

# Disable the firewall
ufw disable
# Check its status
ufw status

1.5 Install Dependencies (run on all hosts)

apt install socat conntrack ebtables ipset -y
apt install lrzsz -y

2. Create the Kubernetes Cluster (with external persistent storage)

2.1 Download kubekey-v3.1.9

# Create a working directory
mkdir /mysvc
cd /mysvc
# Download the release tarball
wget https://github.com/kubesphere/kubekey/releases/download/v3.1.9/kubekey-v3.1.9-linux-amd64.tar.gz
# Unpack it
tar zxf kubekey-v3.1.9-linux-amd64.tar.gz

2.2 Generate the Cluster Config File

# List the Kubernetes versions kk supports
./kk version --show-supported-k8s
# Generate a config file targeting v1.32.2
./kk create config -f k8econfig.yml --with-kubernetes v1.32.2
Generate KubeKey config file successfully

2.3 Edit the Config File

cat>/mysvc/k8econfig.yml<<LW
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: laowang
spec:
  hosts:
  # set the [hostname], [IP address], [user], and [password] for each of the three hosts
  - {name: master-10, address: 192.168.174.10, internalAddress: 192.168.174.10, user: laowang, password: "1"}
  - {name: master-20, address: 192.168.174.20, internalAddress: 192.168.174.20, user: laowang, password: "1"}
  - {name: master-30, address: 192.168.174.30, internalAddress: 192.168.174.30, user: laowang, password: "1"}
  roleGroups:
    etcd:           # hostnames of the three nodes
    - master-10
    - master-20
    - master-30
    control-plane:  # hostnames of the three nodes
    - master-10
    - master-20
    - master-30
    worker:         # hostnames of the three nodes
    - master-10
    - master-20
    - master-30
  controlPlaneEndpoint:
    ## Internal loadbalancer for apiservers
    internalLoadbalancer: haproxy
    domain: lb.kubesphere.local
    address: ""
    port: 6443
  kubernetes:
    version: v1.32.2
    clusterName: laowang.cn
    autoRenewCerts: true
    containerManager: containerd
  etcd:
    type: kubekey
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
    ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
    multusCNI:
      enabled: false
  storage:
    openebs:
      basePath: /data/openebs/local
  registry:
    privateRegistry: "registry.cn-beijing.aliyuncs.com" # Alibaba Cloud Container Registry (ACR), Beijing region
    namespaceOverride: "k8eio"
    registryMirrors: []
    insecureRegistries: []
  addons: []
LW
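kk fails in unhelpful ways when the flattened `hosts` list is malformed, so it is worth confirming the file contains one inline entry per node before running it. A minimal sketch using a scratch copy (a hypothetical `/tmp` file, not the real `/mysvc/k8econfig.yml`):

```shell
# Scratch copy of just the hosts section (assumed shape; adapt to your real file).
cat > /tmp/k8econfig-demo.yml <<'EOF'
spec:
  hosts:
  - {name: master-10, address: 192.168.174.10, internalAddress: 192.168.174.10, user: laowang, password: "1"}
  - {name: master-20, address: 192.168.174.20, internalAddress: 192.168.174.20, user: laowang, password: "1"}
  - {name: master-30, address: 192.168.174.30, internalAddress: 192.168.174.30, user: laowang, password: "1"}
EOF

# Count the inline host entries; this HA layout expects exactly 3.
hosts=$(grep -c '{name: master-' /tmp/k8econfig-demo.yml)
echo "$hosts"
```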

2.4 Install Kubernetes (kubekey 3.1.9)

# Install yamllint
apt install yamllint -y
# Validate the YAML syntax (empty output means the file passed)
yamllint /mysvc/k8econfig.yml
# Install Kubernetes
export KKZONE=cn
./kk create cluster -f k8econfig.yml
# Output (the run opens with the KubeKey ASCII-art banner, omitted here):
01:00:49 UTC [GreetingsModule] Greetings
01:00:49 UTC message: [master-30]
Greetings, KubeKey!
01:00:49 UTC message: [master-10]
Greetings, KubeKey!
01:00:49 UTC message: [master-20]
Greetings, KubeKey!
01:00:49 UTC success: [master-30]
01:00:49 UTC success: [master-10]
01:00:49 UTC success: [master-20]
01:00:49 UTC [NodePreCheckModule] A pre-check on nodes
01:00:49 UTC success: [master-20]
01:00:49 UTC success: [master-30]
01:00:49 UTC success: [master-10]
01:00:49 UTC [ConfirmModule] Display confirmation form
+-----------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+
| name      | sudo | curl | openssl | ebtables | socat | ipset | ipvsadm | conntrack | chrony | docker | containerd | nfs client | ceph client | glusterfs client | time         |
+-----------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+
| master-10 | y    | y    | y       | y        | y     | y     |         | y         |        |        | v1.7.13    |            |             |                  | UTC 01:00:49 |
| master-20 | y    | y    | y       | y        | y     | y     |         | y         |        |        | v1.7.13    |            |             |                  | UTC 01:00:49 |
| master-30 | y    | y    | y       | y        | y     | y     |         | y         |        |        | v1.7.13    |            |             |                  | UTC 01:00:49 |
+-----------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+
# The tail of the output looks like:
02:11:43 UTC skipped: [master-30]
02:11:43 UTC skipped: [master-20]
02:11:43 UTC success: [master-10]
02:11:43 UTC [ConfigureKubernetesModule] Configure kubernetes
02:11:43 UTC success: [master-10]
02:11:43 UTC skipped: [master-20]
02:11:43 UTC skipped: [master-30]
02:11:43 UTC [ChownModule] Chown user $HOME/.kube dir
02:11:43 UTC success: [master-20]
02:11:43 UTC success: [master-30]
02:11:43 UTC success: [master-10]
02:11:43 UTC [AutoRenewCertsModule] Generate k8s certs renew script
02:11:43 UTC success: [master-20]
02:11:43 UTC success: [master-30]
02:11:43 UTC success: [master-10]
02:11:43 UTC [AutoRenewCertsModule] Generate k8s certs renew service
02:11:44 UTC success: [master-10]
02:11:44 UTC success: [master-20]
02:11:44 UTC success: [master-30]
02:11:44 UTC [AutoRenewCertsModule] Generate k8s certs renew timer
02:11:44 UTC success: [master-10]
02:11:44 UTC success: [master-20]
02:11:44 UTC success: [master-30]
02:11:44 UTC [AutoRenewCertsModule] Enable k8s certs renew service
02:11:45 UTC success: [master-20]
02:11:45 UTC success: [master-10]
02:11:45 UTC success: [master-30]
02:11:45 UTC [SaveKubeConfigModule] Save kube config as a configmap
02:11:45 UTC success: [LocalHost]
02:11:45 UTC [AddonsModule] Install addons
02:11:45 UTC message: [LocalHost]
[0/0] enabled addons
02:11:45 UTC success: [LocalHost]
02:11:45 UTC Pipeline[CreateClusterPipeline] execute successfully
Installation is complete.

Please check the result using the command: kubectl get pod -A

2.5 Check the Node Status

kubectl get nodes -owide
# Output:
NAME        STATUS   ROLES                  AGE    VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION       CONTAINER-RUNTIME
master-10   Ready    control-plane,worker   116s   v1.32.2   192.168.174.10   <none>        Ubuntu 22.04.5 LTS   5.15.0-141-generic   containerd://1.7.13
master-20   Ready    control-plane,worker   100s   v1.32.2   192.168.174.20   <none>        Ubuntu 22.04.5 LTS   5.15.0-141-generic   containerd://1.7.13
master-30   Ready    control-plane,worker   100s   v1.32.2   192.168.174.30   <none>        Ubuntu 22.04.5 LTS   5.15.0-141-generic   containerd://1.7.13

2.6 On the Storage Node

# Enable kubectl auto-completion (on 174.50)
cat>>~/.bashrc<<LW
source <(kubectl completion bash)
LW
# Install the NFS server packages (on 174.50)
apt install nfs-common nfs-kernel-server -y
# Install the NFS client package (on all nodes)
apt install nfs-common -y
# Check disk usage
df -Th
# List all logical-volume details
lvdisplay
# Output:
  --- Logical volume ---
  LV Path                /dev/ubuntu-vg/ubuntu-lv
  LV Name                ubuntu-lv
  VG Name                ubuntu-vg
  LV UUID                PIAIdL-MJYE-uXb1-ewO3-GSC5-KQtT-0qC90F
  LV Write Access        read/write
  LV Creation host, time ubuntu-server, 2025-06-05 03:06:55 +0000
  LV Status              available
  # open                 1
  LV Size                10.00 GiB
  Current LE             2560
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0
# Allocate the remaining free space in the volume group to the logical volume
lvextend -l +100%FREE /dev/ubuntu-vg/ubuntu-lv
# Output:
  Size of logical volume ubuntu-vg/ubuntu-lv changed from 10.00 GiB (2560 extents) to 18.22 GiB (4665 extents).
  Logical volume ubuntu-vg/ubuntu-lv successfully resized.
# Check the volume again
lvdisplay
# Output now shows:
  LV Size                18.22 GiB
# Let the filesystem use the newly added space
resize2fs /dev/ubuntu-vg/ubuntu-lv
# Output:
resize2fs 1.46.5 (30-Dec-2021)
Filesystem at /dev/ubuntu-vg/ubuntu-lv is mounted on /; on-line resizing required
old_desc_blocks = 2, new_desc_blocks = 3
The filesystem on /dev/ubuntu-vg/ubuntu-lv is now 4776960 (4k) blocks long.
# Run the same two commands on 174.10/174.20/174.30 as well
lvextend -l +100%FREE /dev/ubuntu-vg/ubuntu-lv
resize2fs /dev/ubuntu-vg/ubuntu-lv
# Create the shared directory (on 174.50)
mkdir /k8s/dynfsclass -p
# Add a share rule to the NFS server config file
# shared directory path    client permission options (comma-separated)
cat>>/etc/exports<<LW
/k8s/dynfsclass   *(rw,sync,no_root_squash)
LW
# Enable the services at boot and start them now
systemctl enable --now nfs-server
systemctl enable --now rpcbind
reboot
# List the directories exported by the NFS server
showmount -e 192.168.174.50
# Output:
/k8s/dynfsclass *
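The `/etc/exports` line added above has two fields: an absolute share path, then a client spec followed by a parenthesised, comma-separated option list. That shape can be sanity-checked on a scratch copy before restarting the NFS server — a sketch only, using a hypothetical `/tmp` file rather than the real `/etc/exports`:

```shell
# Scratch copy of the share rule added above.
cat > /tmp/exports-demo <<'EOF'
/k8s/dynfsclass   *(rw,sync,no_root_squash)
EOF

# Rough shape check: absolute path, whitespace, client spec, then (opt,opt,...).
if grep -Eq '^/[^ ]+[[:space:]]+[^ ]+\(([a-z_]+,)*[a-z_]+\)$' /tmp/exports-demo; then
  result=valid
else
  result=invalid
fi
echo "$result"
```

On the real server, `exportfs -ra` followed by `showmount -e` remains the authoritative check.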

2.7 On the Control Node

# Download the Kubernetes NFS dynamic-provisioner plugin
wget https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner/releases/download/nfs-subdir-external-provisioner-4.0.18/nfs-subdir-external-provisioner-4.0.18.tgz
# Unpack it
tar zxf nfs-subdir-external-provisioner-4.0.18.tgz
# Rewrite the deployment manifest
cat >/mysvc/nfs-subdir-external-provisioner-nfs-subdir-external-provisioner-4.0.18/deploy/deployment.yaml<<LW
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 192.168.174.50  ##### storage+harbor server
            - name: NFS_PATH
              value: /k8s/dynfsclass  ##### shared directory
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.174.50  ##### storage+harbor server
            path: /k8s/dynfsclass  ##### shared directory
LW

2.8 Pull the NFS Provisioner Image

# Use containerd (ctr) to pull the NFS provisioner image from the Huawei Cloud SWR mirror
# Mirror page: https://docker.aityp.com/image/registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
# Run on all three masters
ctr -n k8s.io images pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
# Output:
swr.cn-north-4.myhuaweicloud.com/ddn-k8s/registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2: resolved       |++++++++++++++++++++++++++++++++++++++| 
manifest-sha256:f741e403b3ca161e784163de3ebde9190905fdbf7dfaa463620ab8f16c0f6423:                            done           |++++++++++++++++++++++++++++++++++++++| 
config-sha256:932b0bface75b80e713245d7c2ce8c44b7e127c075bd2d27281a16677c8efef3:                              done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:528677575c0b965326da0c29e21feb548e5d4c2eba8c48a611e9a50af6cf3cdc:                               done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:60775238382ed8f096b163a652f5457589739d65f1395241aba12847e7bdc2a1:                               done           |++++++++++++++++++++++++++++++++++++++| 
elapsed: 3.2 s                                                                                               total:  16.6 M (5.2 MiB/s)                                       
unpacking linux/amd64 sha256:f741e403b3ca161e784163de3ebde9190905fdbf7dfaa463620ab8f16c0f6423...
done: 939.146543ms
# Re-tag the image with its canonical registry.k8s.io name
ctr -n k8s.io images tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2  registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
# Output:
registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
# Remove the mirror-tagged reference
ctr -n k8s.io images remove swr.cn-north-4.myhuaweicloud.com/ddn-k8s/registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
# Output:
swr.cn-north-4.myhuaweicloud.com/ddn-k8s/registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
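The pull → tag → remove sequence above just restores the image's canonical name after pulling it through the SWR mirror. The renaming step is pure string surgery, sketched here as a hypothetical helper (the `ctr` calls in the comment are the ones used above; the helper name is made up):

```shell
# Mirror prefix used by the SWR proxy above.
MIRROR_PREFIX="swr.cn-north-4.myhuaweicloud.com/ddn-k8s/"

# Hypothetical helper: map a mirrored reference back to its canonical name
# by stripping the mirror prefix.
canonical_name() {
  echo "${1#"$MIRROR_PREFIX"}"
}

# On the nodes this would drive:
#   ctr -n k8s.io images pull "$mirrored"
#   ctr -n k8s.io images tag "$mirrored" "$(canonical_name "$mirrored")"
#   ctr -n k8s.io images remove "$mirrored"
canonical_name "swr.cn-north-4.myhuaweicloud.com/ddn-k8s/registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2"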

2.9 Create the Kubernetes Resources

# Create the resources defined in rbac.yaml
kubectl create -f rbac.yaml
# Output:
serviceaccount/nfs-client-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner created
role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
# Create the Deployment that manages the provisioner's ReplicaSet and Pod
kubectl create -f deployment.yaml
# Output:
deployment.apps/nfs-client-provisioner created
# List the Deployments in the current namespace
kubectl get deployment.apps
# Output (created successfully):
NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
nfs-client-provisioner   1/1     1            1           8m38s
# A StorageClass defines the storage types and configuration available in the cluster and allows persistent volumes to be created dynamically
kubectl create -f class.yaml 
# Output:
storageclass.storage.k8s.io/nfs-client created
# List the Pods in the current namespace
kubectl get pod
# Output:
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-7bcc898c94-mjskl   1/1     Running   0          13m
# List all StorageClass resources in the cluster
kubectl get storageclasses.storage.k8s.io 
# Output:
NAME         PROVISIONER                                   RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs-client   k8s-sigs.io/nfs-subdir-external-provisioner   Delete          Immediate           false                  4m1s
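With the `nfs-client` StorageClass in place, a PVC can now be provisioned dynamically. A minimal sketch in the same heredoc style as the rest of this guide — the claim name, size, and `/tmp` path are hypothetical, not from the original steps:

```shell
# Hypothetical test claim against the nfs-client StorageClass created above.
cat > /tmp/test-pvc.yaml <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  storageClassName: nfs-client
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
EOF

# On the control node this would be applied with:
#   kubectl create -f /tmp/test-pvc.yaml
#   kubectl get pvc test-claim   # should reach STATUS Bound once the provisioner reacts
grep 'storageClassName' /tmp/test-pvc.yaml
```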

2.10 Install KubeSphere

# Install KubeSphere Core, overriding the default image registries
helm upgrade --install -n kubesphere-system --create-namespace ks-core https://charts.kubesphere.com.cn/main/ks-core-1.1.4.tgz --debug --wait --set global.imageRegistry=swr.cn-southwest-2.myhuaweicloud.com/ks --set extension.imageRegistry=swr.cn-southwest-2.myhuaweicloud.com/ks --set hostClusterName=laowang
# Output (installation succeeded):
NOTES:
Thank you for choosing KubeSphere Helm Chart.
Please be patient and wait for several seconds for the KubeSphere deployment to complete.

1. Wait for Deployment Completion

   Confirm that all KubeSphere components are running by executing the following command:

   kubectl get pods -n kubesphere-system

2. Access the KubeSphere Console

   Once the deployment is complete, you can access the KubeSphere console using the following URL:

   http://192.168.174.10:30880

3. Login to KubeSphere Console

   Use the following credentials to log in:

   Account: admin
   Password: P@88w0rd

NOTE: It is highly recommended to change the default password immediately after the first login.
For additional information and details, please visit https://kubesphere.io.

Analysis:
KubeSphere Core is the foundational edition of the KubeSphere container platform. It focuses on the core feature set, letting teams stand up a lightweight container-management platform quickly. Compared with the full edition, Core is leaner, making it a better fit for resource-constrained environments or for users who only need the basics.

