K8s: Cluster Backup and Restore with Velero
Sharing an open-source tool for Kubernetes cluster disaster recovery, backup, and restore: Velero.
This post covers:
A brief introduction to Velero
Downloading and installing Velero
A backup/restore demo and a disaster-recovery test demo
Analysis of a failed-restore scenario
Corrections from readers are welcome where my understanding falls short.
"I wanted only to try to live in accord with the promptings which came from my true self. Why was that so very difficult?" ------ Hermann Hesse, Demian
A Brief Introduction to Velero
Velero is an open-source tool from VMware for safely backing up and restoring Kubernetes cluster resources and persistent volumes, performing disaster recovery, and migrating resources between clusters.
What Velero can do:
Back up a cluster and restore it in case of loss.
Migrate cluster resources to other clusters.
Replicate your production cluster to development and test clusters.
Velero consists of two parts:
A server that runs in your cluster (the Velero server)
A command-line client that runs locally (the velero CLI)
When is it appropriate to use Velero instead of etcd's built-in backup/restore?
How should you choose between Velero and etcd snapshot backups?
My personal take:
etcd snapshot backup suits severe cluster disasters, for example when all etcd nodes are down, or when snapshot files are lost or corrupted and the whole Kubernetes cluster is unavailable. In such cases, restoring from an etcd backup is fast and has a high success rate, whereas a Velero restore depends on other components and requires a live cluster: at minimum, kube-apiserver and etcd must be running.
Velero suits cluster migration and backing up/restoring a subset of a cluster, such as per-namespace backups. If a namespace is deleted by mistake and its YAML manifests were never saved, Velero can restore it quickly. For system upgrades that touch many API resource objects, you can take a backup before the upgrade and recover quickly with Velero if the upgrade fails.
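For contrast, a minimal sketch of an etcd snapshot backup (assuming etcdctl v3 and the usual kubeadm certificate paths; adjust the endpoint and certificate locations to your cluster):
# Take a snapshot from the local etcd member
ETCDCTL_API=3 etcdctl snapshot save /var/lib/etcd-backup/snap.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key
# Sanity-check the snapshot file
ETCDCTL_API=3 etcdctl snapshot status /var/lib/etcd-backup/snap.db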
The answer from the Velero GitHub project:
Etcd's backup/restore tooling is good for recovering from data loss in a single etcd cluster. For example, it is a good idea to take a backup of etcd prior to upgrading etcd itself. For more sophisticated management of Kubernetes cluster backups and restores, we feel that Velero is generally a better approach. It gives you the ability to throw away an unstable cluster and restore your Kubernetes resources and data into a new cluster, which you can't easily do just by backing up and restoring etcd.
Examples of cases where Velero is useful:
You don't have access to etcd (e.g., you're running on GKE)
Backing up both Kubernetes resources and persistent volume state
Cluster migration (see the sketch after this list)
Backing up a subset of your Kubernetes resources
Backing up Kubernetes resources that are stored across multiple etcd clusters (for example if you run a custom apiserver)
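In practice, cluster migration means pointing a second cluster at the same backup storage. A minimal sketch of that flow, reusing the install command from the deployment section below (the Minio address placeholder is an assumption; it must be reachable from the target cluster):
# On the target cluster: install Velero against the same bucket
velero install --provider aws --plugins velero/velero-plugin-for-aws:v1.2.1 \
  --bucket velero --secret-file ./credentials-velero \
  --use-volume-snapshots=false \
  --backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://<minio-address>:9000
# Backups taken on the source cluster become visible here; restore one of them
velero backup get
velero restore create --from-backup velero-demo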
How Disaster Recovery Works: A Brief Overview
For the full details, see the official documentation; here is a short summary.
Each Velero operation (on-demand backup, scheduled backup, restore) is a custom resource, defined with a Kubernetes Custom Resource Definition (CRD) and stored in etcd. Velero also includes controllers that process the custom resources to perform backups, restores, and all related operations.
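Because the operations are ordinary API objects, you can inspect them with kubectl; a quick check after installation (standard kubectl commands, default velero namespace assumed):
# List the CRDs that Velero registers
kubectl get crd | grep velero.io
# Backup objects are queryable like any other resource
kubectl get backups.velero.io -n velero
kubectl get backups.velero.io velero-demo -n velero -o yaml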
Backup workflow
When you run velero backup create test-backup:
The Velero client calls the Kubernetes API server to create a Backup object.
The BackupController notices the new Backup object and performs validation.
The BackupController begins the backup process. It collects the data to back up by querying the API server for resources.
The BackupController calls the object storage service (for example, AWS S3) to upload the backup file.
By default, velero backup create makes disk snapshots of any persistent volumes. You can adjust the snapshots by specifying additional flags; run velero backup create --help to see the available flags. Snapshots can be disabled with --snapshot-volumes=false.
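A few common variations, sketched with flags documented in velero backup create --help (the backup names, namespace, and label values are placeholders):
# Back up a single namespace, without volume snapshots
velero backup create ns-backup --include-namespaces cadvisor --snapshot-volumes=false
# Back up everything matching a label selector
velero backup create app-backup --selector app=nginx
# Skip a namespace and keep the backup for 72 hours
velero backup create slim-backup --exclude-namespaces kube-system --ttl 72h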
Restore workflow
When you run velero restore create:
The Velero client calls the Kubernetes API server to create a Restore object.
The RestoreController notices the new Restore object and performs validation.
The RestoreController fetches the backup information from object storage. It then preprocesses the backed-up resources to make sure they will run on the new cluster. For example, it uses the backed-up API versions to verify that the resources to be restored will work on the target cluster.
The RestoreController starts the restore process, restoring each eligible resource one at a time.
By default, Velero performs a non-destructive restore, meaning it won't delete any data on the target cluster. If a resource in the backup already exists in the target cluster, Velero skips that resource. You can configure Velero to use an update policy instead with the --existing-resource-policy restore flag. When this flag is set to update, Velero attempts to update existing resources in the target cluster to match those in the backup.
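As a sketch, a restore that overwrites drifted resources instead of skipping them (velero-demo is the backup created later in this post):
velero restore create --from-backup velero-demo --existing-resource-policy=update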
Installation and Download
Check the cluster compatibility matrix:
https://github.com/vmware-tanzu/velero#velero-compatibility-matrix
The current cluster environment:
┌──[root@vms100.liruilongs.github.io]-[~]
└─$kubectl version --output=json
{
  "clientVersion": {
    "major": "1",
    "minor": "25",
    "gitVersion": "v1.25.1",
    "gitCommit": "e4d4e1ab7cf1bf15273ef97303551b279f0920a9",
    "gitTreeState": "clean",
    "buildDate": "2022-09-14T19:49:27Z",
    "goVersion": "go1.19.1",
    "compiler": "gc",
    "platform": "linux/amd64"
  },
  "kustomizeVersion": "v4.5.7",
  "serverVersion": {
    "major": "1",
    "minor": "25",
    "gitVersion": "v1.25.1",
    "gitCommit": "e4d4e1ab7cf1bf15273ef97303551b279f0920a9",
    "gitTreeState": "clean",
    "buildDate": "2022-09-14T19:42:30Z",
    "goVersion": "go1.19.1",
    "compiler": "gc",
    "platform": "linux/amd64"
  }
}
Download the installation files:
https://github.com/vmware-tanzu/velero/releases/tag/v1.10.1-rc.1
https://github.com/vmware-tanzu/velero/releases/download/v1.10.1-rc.1/velero-v1.10.1-rc.1-linux-amd64.tar.gz
Client
Client installation:
┌──[root@vms100.liruilongs.github.io]-[~/ansible/velero]
└─$wget --no-check-certificate https://github.com/vmware-tanzu/velero/releases/download/v1.10.1-rc.1/velero-v1.10.1-rc.1-linux-amd64.tar.gz
┌──[root@vms100.liruilongs.github.io]-[~/ansible/velero]
└─$ls
velero-v1.10.1-rc.1-linux-amd64.tar.gz
┌──[root@vms100.liruilongs.github.io]-[~/ansible/velero]
└─$tar -zxvf velero-v1.10.1-rc.1-linux-amd64.tar.gz
velero-v1.10.1-rc.1-linux-amd64/LICENSE
velero-v1.10.1-rc.1-linux-amd64/examples/.DS_Store
velero-v1.10.1-rc.1-linux-amd64/examples/README.md
velero-v1.10.1-rc.1-linux-amd64/examples/minio
velero-v1.10.1-rc.1-linux-amd64/examples/minio/00-minio-deployment.yaml
velero-v1.10.1-rc.1-linux-amd64/examples/nginx-app
velero-v1.10.1-rc.1-linux-amd64/examples/nginx-app/README.md
velero-v1.10.1-rc.1-linux-amd64/examples/nginx-app/base.yaml
velero-v1.10.1-rc.1-linux-amd64/examples/nginx-app/with-pv.yaml
velero-v1.10.1-rc.1-linux-amd64/velero
┌──[root@vms100.liruilongs.github.io]-[~/ansible/velero]
└─$cd velero-v1.10.1-rc.1-linux-amd64/
┌──[root@vms100.liruilongs.github.io]-[~/ansible/velero/velero-v1.10.1-rc.1-linux-amd64]
└─$cp velero /usr/local/bin/
┌──[root@vms100.liruilongs.github.io]-[~/ansible/velero/velero-v1.10.1-rc.1-linux-amd64]
└─$velero version
Client:
Version: v1.10.1-rc.1
Git commit: e4d2a83917cd848e5f4e6ebc445fd3d262de10fa
<error getting server version: no matches for kind "ServerStatusRequest" in version "velero.io/v1">
Configure shell completion:
┌──[root@vms100.liruilongs.github.io]-[~/ansible/velero/velero-v1.10.1-rc.1-linux-amd64]
└─$velero completion bash >/etc/bash_completion.d/velero
┌──[root@vms100.liruilongs.github.io]-[~/ansible/velero/velero-v1.10.1-rc.1-linux-amd64]
└─$velero client config set colorized=false
Server installation
Along with the server, we need to deploy Minio, an object storage system that will hold the backup files.
Create a Velero-specific credentials file named credentials-velero in your Velero directory; it is used to connect to Minio:
┌──[root@vms100.liruilongs.github.io]-[~/ansible/velero/velero-v1.10.1-rc.1-linux-amd64]
└─$vim credentials-velero
┌──[root@vms100.liruilongs.github.io]-[~/ansible/velero/velero-v1.10.1-rc.1-linux-amd64]
└─$cat credentials-velero
[default]
aws_access_key_id = minio
aws_secret_access_key = minio123
Start the server and the local storage service.
Deploying the local storage service
The YAML file below ships in the client package unpacked above; you can see it after extracting the archive.
It deploys a Minio instance accessible from inside the cluster and starts a Job that creates the bucket Velero needs in Minio. To access logs and run velero describe commands, the Minio service must also be exposed outside the cluster.
Modify the YAML file: the main changes are switching the Service type to NodePort and pinning the Minio console to a static address.
┌──[root@vms100.liruilongs.github.io]-[~/ansible/velero/velero-v1.10.1-rc.1-linux-amd64]
└─$cat examples/minio/00-minio-deployment.yaml
# Copyright 2017 the Velero contributors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
---
apiVersion: v1
kind: Namespace
metadata:
  name: velero
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: velero
  name: minio
  labels:
    component: minio
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      component: minio
  template:
    metadata:
      labels:
        component: minio
    spec:
      volumes:
      - name: storage
        emptyDir: {}
      - name: config
        emptyDir: {}
      containers:
      - name: minio
        image: quay.io/minio/minio:latest
        imagePullPolicy: IfNotPresent
        args:
        - server
        - /storage
        - --console-address=:9090
        - --config-dir=/config
        env:
        - name: MINIO_ROOT_USER
          value: "minio"
        - name: MINIO_ROOT_PASSWORD
          value: "minio123"
        ports:
        - containerPort: 9000
        - containerPort: 9090
        volumeMounts:
        - name: storage
          mountPath: "/storage"
        - name: config
          mountPath: "/config"
---
apiVersion: v1
kind: Service
metadata:
  namespace: velero
  name: minio
  labels:
    component: minio
spec:
  # ClusterIP is recommended for production environments.
  # Change to NodePort if needed per documentation,
  # but only if you run Minio in a test/trial environment, for example with Minikube.
  type: NodePort
  ports:
  - port: 9000
    name: api
    targetPort: 9000
    protocol: TCP
  - port: 9099
    name: console
    targetPort: 9090
    protocol: TCP
  selector:
    component: minio
---
apiVersion: batch/v1
kind: Job
metadata:
  namespace: velero
  name: minio-setup
  labels:
    component: minio
spec:
  template:
    metadata:
      name: minio-setup
    spec:
      restartPolicy: OnFailure
      volumes:
      - name: config
        emptyDir: {}
      containers:
      - name: mc
        image: minio/mc:latest
        imagePullPolicy: IfNotPresent
        command:
        - /bin/sh
        - -c
        - "mc --config-dir=/config config host add velero http://minio:9000 minio minio123 && mc --config-dir=/config mb -p velero/velero"
        volumeMounts:
        - name: config
          mountPath: "/config"
Note: the example Minio YAML uses "emptyDir". Your node needs enough free space to store the data being backed up, plus 1GB of headroom. If the node does not have enough space, you can modify the example YAML to use a Persistent Volume instead of "emptyDir".
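A minimal sketch of that change (the PVC name minio-storage and the local-path storage class are assumptions; use a storage class that exists in your cluster):
# Hypothetical PVC to back Minio's storage volume
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  namespace: velero
  name: minio-storage
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 10Gi
EOF
# Then, in the Deployment's volumes section, replace
#   - name: storage
#     emptyDir: {}
# with
#   - name: storage
#     persistentVolumeClaim:
#       claimName: minio-storage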
bucket: the name of the bucket you created in Minio
backup-location-config: change xxx.xxx.xxx.xxx to the IP address of your Minio server.
Deploying Velero into the cluster
The deployment command:
velero install \
--provider aws \
--plugins velero/velero-plugin-for-aws:v1.2.1 \
--bucket velero \
--secret-file ./credentials-velero \
--use-volume-snapshots=false \
--backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://minio.velero.svc:9000
If you use a private image registry, you can export the manifests as YAML, adjust them, and then apply:
┌──[root@vms100.liruilongs.github.io]-[~/ansible/velero/velero-v1.10.1-rc.1-linux-amd64]
└─$velero install \
--provider aws \
--plugins velero/velero-plugin-for-aws:v1.2.1 \
--bucket velero \
--secret-file ./credentials-velero \
--use-volume-snapshots=false \
--backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://minio.velero.svc:9000 \
--dry-run -o yaml > velero_deploy.yaml
Apply the manifests:
┌──[root@vms100.liruilongs.github.io]-[~/ansible/velero/velero-v1.10.1-rc.1-linux-amd64]
└─$kubectl apply -f velero_deploy.yaml
CustomResourceDefinition/backuprepositories.velero.io: attempting to create resource
CustomResourceDefinition/backuprepositories.velero.io: attempting to create resource client
..........
BackupStorageLocation/default: attempting to create resource
BackupStorageLocation/default: attempting to create resource client
BackupStorageLocation/default: created
Deployment/velero: attempting to create resource
Deployment/velero: attempting to create resource client
Deployment/velero: created
Velero is installed! ? Use 'kubectl logs deployment/velero -n velero' to view the status.
┌──[root@vms100.liruilongs.github.io]-[~/ansible/velero/velero-v1.10.1-rc.1-linux-amd64]
└─$
Once the deployment completes, the minio-setup Job automatically creates the bucket that backup files are uploaded to.
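Before taking a backup it is worth confirming that Velero can reach the storage location; a quick check (standard velero and kubectl commands):
# The velero and minio pods should be Running
kubectl get pods -n velero
# The default backup storage location should be listed (newer versions also report an Available phase)
velero backup-location get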
Backup
Velero supports both full-cluster and partial backups.
An on-demand full backup:
┌──[root@vms100.liruilongs.github.io]-[~/ansible/velero/velero-v1.10.1-rc.1-linux-amd64]
└─$velero backup create velero-demo
Backup request "velero-demo" submitted successfully.
Run `velero backup describe velero-demo` or `velero backup logs velero-demo` for more details.
┌──[root@vms100.liruilongs.github.io]-[~/ansible/velero/velero-v1.10.1-rc.1-linux-amd64]
└─$velero get backup velero-demo
NAME STATUS ERRORS WARNINGS CREATED EXPIRES STORAGE LOCATION SELECTOR
velero-demo InProgress 0 0 2023-01-28 22:18:45 +0800 CST 29d default <none>
┌──[root@vms100.liruilongs.github.io]-[~/ansible/velero/velero-v1.10.1-rc.1-linux-amd64]
└─$
Check the backup status:
┌──[root@vms100.liruilongs.github.io]-[~/ansible/velero/velero-v1.10.1-rc.1-linux-amd64]
└─$velero get backup velero-demo
NAME STATUS ERRORS WARNINGS CREATED EXPIRES STORAGE LOCATION SELECTOR
velero-demo Completed 0 0 2023-01-28 22:18:45 +0800 CST 29d default <none>
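For more detail than the one-line status, the describe and logs subcommands are sketched below (velero-demo is the backup created above):
# Per-resource detail for the completed backup
velero backup describe velero-demo --details
# Server-side logs for the backup run
velero backup logs velero-demo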
Scheduled backup
Create a scheduled backup that runs once a day at midnight:
┌──[root@vms100.liruilongs.github.io]-[~/ansible/velero/velero-v1.10.1-rc.1-linux-amd64]
└─$velero schedule create k8s-backup --schedule="@daily"
Schedule "k8s-backup" created successfully.
┌──[root@vms100.liruilongs.github.io]-[~/ansible/velero/velero-v1.10.1-rc.1-linux-amd64]
└─$velero get schedule
NAME STATUS CREATED SCHEDULE BACKUP TTL LAST BACKUP SELECTOR PAUSED
k8s-backup Enabled 2023-01-29 00:11:03 +0800 CST @daily 0s n/a <none> false
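The --schedule flag also accepts a standard cron expression, and --ttl controls how long each backup is retained; a sketch:
# Every day at 01:00, keeping each backup for 72 hours
velero schedule create nightly-backup --schedule="0 1 * * *" --ttl 72h
# Backups created by a schedule are named <schedule-name>-<timestamp>
velero backup get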
Restore
Restore from the backup created above:
┌──[root@vms100.liruilongs.github.io]-[~/ansible/velero/velero-v1.10.1-rc.1-linux-amd64]
└─$velero restore create --from-backup velero-demo
Restore request "velero-demo-20230129001615" submitted successfully.
Run `velero restore describe velero-demo-20230129001615` or `velero restore logs velero-demo-20230129001615` for more details.
┌──[root@vms100.liruilongs.github.io]-[~/ansible/velero/velero-v1.10.1-rc.1-linux-amd64]
└─$velero get restore
NAME BACKUP STATUS STARTED COMPLETED ERRORS WARNINGS CREATED SELECTOR
velero-demo-20230129001615 velero-demo InProgress 2023-01-29 00:16:15 +0800 CST <nil> 0 0 2023-01-29 00:16:15 +0800 CST <none>
┌──[root@vms100.liruilongs.github.io]-[~/ansible/velero/velero-v1.10.1-rc.1-linux-amd64]
└─$velero get restore
NAME BACKUP STATUS STARTED COMPLETED ERRORS WARNINGS CREATED SELECTOR
velero-demo-20230129001615 velero-demo Completed 2023-01-29 00:16:15 +0800 CST 2023-01-29 00:17:20 +0800 CST 0 135 2023-01-29 00:16:15 +0800 CST <none>
┌──[root@vms100.liruilongs.github.io]-[~/ansible/velero/velero-v1.10.1-rc.1-linux-amd64]
└─$
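A restore does not have to replay the entire backup; a namespace-scoped sketch (the --include-namespaces flag is documented in velero restore create --help):
# Restore only the cadvisor namespace from the full-cluster backup
velero restore create --from-backup velero-demo --include-namespaces cadvisor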
Disaster recovery test
Test by deleting a namespace.
Resources currently in the namespace:
┌──[root@vms100.liruilongs.github.io]-[~/back]
└─$kubectl-ketall -n cadvisor
W0129 00:34:28.299699 126128 warnings.go:70] kubevirt.io/v1 VirtualMachineInstancePresets is now deprecated and will be removed in v2.
W0129 00:34:28.354853 126128 warnings.go:70] metallb.io v1beta1 AddressPool is deprecated, consider using IPAddressPool
NAME NAMESPACE AGE
configmap/kube-root-ca.crt cadvisor 2d4h
pod/cadvisor-5v7hl cadvisor 2d4h
pod/cadvisor-7dnmk cadvisor 2d4h
pod/cadvisor-7l4zf cadvisor 2d4h
pod/cadvisor-dj6dm cadvisor 2d4h
pod/cadvisor-sjpq8 cadvisor 2d4h
serviceaccount/cadvisor cadvisor 2d4h
serviceaccount/default cadvisor 2d4h
controllerrevision.apps/cadvisor-6cc5c5c9cc cadvisor 2d4h
daemonset.apps/cadvisor cadvisor 2d4h
Delete the namespace (the delete hangs, so it is interrupted with Ctrl-C and retried with --force):
┌──[root@vms100.liruilongs.github.io]-[~/back]
└─$kubectl delete ns cadvisor
namespace "cadvisor" deleted
^C┌──[root@vms100.liruilongs.github.io]-[~/back]
└─$kubectl delete ns cadvisor --force
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
Error from server (NotFound): namespaces "cadvisor" not found
Check the namespace resources again:
┌──[root@vms100.liruilongs.github.io]-[~/back]
└─$kubectl-ketall -n cadvisor
W0129 00:35:25.548656 127598 warnings.go:70] kubevirt.io/v1 VirtualMachineInstancePresets is now deprecated and will be removed in v2.
W0129 00:35:25.581030 127598 warnings.go:70] metallb.io v1beta1 AddressPool is deprecated, consider using IPAddressPool
No resources found.
Use the earlier backup to restore the deleted namespace:
┌──[root@vms100.liruilongs.github.io]-[~/back]
└─$velero restore create --from-backup velero-demo
Restore request "velero-demo-20230129003541" submitted successfully.
Run `velero restore describe velero-demo-20230129003541` or `velero restore logs velero-demo-20230129003541` for more details.
┌──[root@vms100.liruilongs.github.io]-[~/back]
└─$velero get restore
NAME BACKUP STATUS STARTED COMPLETED ERRORS WARNINGS CREATED SELECTOR
velero-demo-20230129001615 velero-demo Completed 2023-01-29 00:16:15 +0800 CST 2023-01-29 00:17:20 +0800 CST 0 135 2023-01-29 00:16:15 +0800 CST <none>
velero-demo-20230129003541 velero-demo InProgress 2023-01-29 00:35:41 +0800 CST <nil> 0 0 2023-01-29 00:35:41 +0800 CST <none>
┌──[root@vms100.liruilongs.github.io]-[~/back]
└─$velero get restore
NAME BACKUP STATUS STARTED COMPLETED ERRORS WARNINGS CREATED SELECTOR
velero-demo-20230129001615 velero-demo Completed 2023-01-29 00:16:15 +0800 CST 2023-01-29 00:17:20 +0800 CST 0 135 2023-01-29 00:16:15 +0800 CST <none>
velero-demo-20230129003541 velero-demo Completed 2023-01-29 00:35:41 +0800 CST 2023-01-29 00:36:46 +0800 CST 0 135 2023-01-29 00:35:41 +0800 CST <none>
Confirm that the namespace resources have been restored:
┌──[root@vms100.liruilongs.github.io]-[~/back]
└─$kubectl-ketall -n cadvisor
W0129 00:37:29.787766 130766 warnings.go:70] kubevirt.io/v1 VirtualMachineInstancePresets is now deprecated and will be removed in v2.
W0129 00:37:29.819111 130766 warnings.go:70] metallb.io v1beta1 AddressPool is deprecated, consider using IPAddressPool
NAME NAMESPACE AGE
configmap/kube-root-ca.crt cadvisor 94s
pod/cadvisor-5v7hl cadvisor 87s
pod/cadvisor-7dnmk cadvisor 87s
pod/cadvisor-7l4zf cadvisor 87s
pod/cadvisor-dj6dm cadvisor 87s
pod/cadvisor-sjpq8 cadvisor 87s
serviceaccount/cadvisor cadvisor 88s
serviceaccount/default cadvisor 94s
controllerrevision.apps/cadvisor-6cc5c5c9cc cadvisor 63s
daemonset.apps/cadvisor cadvisor 63s
┌──[root@vms100.liruilongs.github.io]-[~/back]
└─$
┌──[root@vms100.liruilongs.github.io]-[~/back]
└─$kubectl get all -n cadvisor
Warning: kubevirt.io/v1 VirtualMachineInstancePresets is now deprecated and will be removed in v2.
NAME READY STATUS RESTARTS AGE
pod/cadvisor-5v7hl 1/1 Running 0 2m50s
pod/cadvisor-7dnmk 1/1 Running 0 2m50s
pod/cadvisor-7l4zf 1/1 Running 0 2m50s
pod/cadvisor-dj6dm 1/1 Running 0 2m50s
pod/cadvisor-sjpq8 1/1 Running 0 2m50s
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/cadvisor 5 5 5 5 5 <none> 2m26s
┌──[root@vms100.liruilongs.github.io]-[~/back]
└─$
Analysis of a failed restore
One caveat: a restore can get stuck if a namespace deletion was started but never completed. In the example below, a kubectl delete of the kubevirt namespace was interrupted before it finished, leaving the namespace stuck in Terminating. The same situation can arise after repeatedly deleting and recreating API resources, when you then want to restore the cluster state from before those operations.
┌──[root@vms100.liruilongs.github.io]-[~/ansible/kubevirt]
└─$kubectl get ns
NAME STATUS AGE
cadvisor Active 39h
default Active 3d20h
ingress-nginx Active 3d20h
kube-node-lease Active 3d20h
kube-public Active 3d20h
kube-system Active 3d20h
kubevirt Terminating 3d20h
local-path-storage Active 3d20h
metallb-system Active 3d20h
velero Active 40h
If you run a Velero restore at this point, it can get stuck in one of the two states below: InProgress or New.
┌──[root@vms100.liruilongs.github.io]-[~/ansible/kubevirt]
└─$velero get restore
NAME BACKUP STATUS STARTED COMPLETED ERRORS WARNINGS CREATED SELECTOR
velero-demo-20230130105328 velero-demo InProgress 2023-01-30 10:53:28 +0800 CST <nil> 0 0 2023-01-30 10:53:28 +0800 CST <none>
┌──[root@vms100.liruilongs.github.io]-[~/ansible/kubevirt]
└─$velero get restores
NAME BACKUP STATUS STARTED COMPLETED ERRORS WARNINGS CREATED SELECTOR
velero-demo-20230130161258 velero-demo New <nil> <nil> 0 0 2023-01-30 16:12:58 +0800 CST <none>
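A namespace usually hangs in Terminating because of finalizers that were never removed; a quick way to check (both jsonpath queries are standard kubectl):
# Show what is keeping the namespace alive
kubectl get namespace kubevirt -o jsonpath='{.spec.finalizers}{"\n"}'
kubectl get namespace kubevirt -o jsonpath='{.metadata.finalizers}{"\n"}'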
If the restore shows no progress for a long time, the stuck namespace must be removed completely with a script like the one below; once it is gone, the restore completes normally.
┌──[root@vms100.liruilongs.github.io]-[~/ansible/kubevirt]
└─$velero get restores
NAME BACKUP STATUS STARTED COMPLETED ERRORS WARNINGS CREATED SELECTOR
.............
velero-demo-20230130161258 velero-demo Completed 2023-01-30 20:53:58 +0800 CST 2023-01-30 20:55:20 +0800 CST 0 164 2023-01-30 16:12:58 +0800 CST <none>
┌──[root@vms100.liruilongs.github.io]-[~/ansible/kubevirt]
└─$date
2023年 01月 30日 星期一 21:02:49 CST
┌──[root@vms100.liruilongs.github.io]-[~/ansible/kubevirt]
└─$
┌──[root@vms100.liruilongs.github.io]-[~/ansible/k8s_shell_secript]
└─$cat delete_namespace.sh
#!/bin/bash
# Force-finalize a stuck namespace: strip its finalizers and PUT the
# result to the namespace's /finalize subresource via a local kubectl proxy.
coproc kubectl proxy --port=30990 &
if [ $# -eq 0 ] ; then
  echo "Pass the namespace you want to delete as the first argument."
  exit 1
fi
# Dump the namespace object, then delete the line after "finalizers"
# (the finalizer entry itself) from the JSON.
kubectl get namespace $1 -o json > logging.json
sed -i '/"finalizers"/{n;d}' logging.json
# Submit the modified object to the finalize subresource.
curl -k -H "Content-Type: application/json" -X PUT --data-binary @logging.json http://127.0.0.1:30990/api/v1/namespaces/${1}/finalize
# Stop the background kubectl proxy.
kill %1
┌──[root@vms100.liruilongs.github.io]-[~/ansible/k8s_shell_secript]
└─$sh delete_namespace.sh kubevirt
┌──[root@vms100.liruilongs.github.io]-[~/ansible/k8s_shell_secript]
└─$ls
delete_namespace.sh logging.json
Parts of this post reference external material.
The linked reference content is copyrighted by its original authors; please let me know of any infringement.
Author: 山河已無恙
Follow the WeChat official account: 山河已無恙