📝 Introduction
In a modern cloud-native stack, the image registry is a key component of the entire CI/CD pipeline.
Harbor, an enterprise-grade registry open-sourced by VMware, offers a richer feature set than the plain Docker Registry, including:
- 🔐 Role-based access control (RBAC)
- 🔎 Image vulnerability scanning
- 🔁 Image replication
- 🧩 OIDC / LDAP authentication
- 🌐 HTTPS and certificate management
This article walks you through deploying Harbor on a Kubernetes cluster step by step and completing the basic access configuration.
⚖️ Docker Compose vs. Kubernetes Deployment
An earlier article covered deploying Harbor with Docker Compose: "Offline Deployment Guide for the Harbor Enterprise Registry". How do the two approaches differ? Here is a comparison:

| Aspect | Docker Compose | Kubernetes + Helm |
| --- | --- | --- |
| 🏗 Deployment | `docker-compose up -d` starts a multi-container service | Declarative deployment via a Helm chart |
| ⚙️ Configuration | Manual, based on `.env` + `docker-compose.yml` | Templated, parameterized configuration via `values.yaml` |
| 💾 Storage | Local path mounts or NFS | PVC persistent volumes, multiple StorageClass options |
| 🔁 Upgrade/rollback | Manually pull the new version, stop and replace containers | One-command `helm upgrade`, with version rollback |
| 💡 Service discovery | Relies on host port mappings | Native Service / Ingress load balancing |
| 🧩 Scalability | Fixed single host, no horizontal scaling | Multiple replicas with automatic scheduling |
| 🔒 Security | Host firewall and Docker networking | Can combine NetworkPolicy, OIDC, and Secret management |
| 🔍 Observability | Install Prometheus/Grafana etc. yourself | Easy integration with the K8s monitoring ecosystem (Metrics, Alertmanager) |
| 🧱 Use case | Single host or small internal teams | Enterprise-grade, distributed, DevOps cluster environments |
| 🚀 Operations | Manually maintain container state and upgrades | Automated via GitOps tools (Argo CD, etc.) |

In short:
- Docker Compose suits quick initial setups and test environments;
- Kubernetes + Helm is better for long-running production environments that need high availability and automated management.
📋 Prerequisites
A running Kubernetes cluster. For a one-command setup, see the earlier article: Part 4: Deploying a Cluster with sealos.
🚀 Step-by-Step Guide
1️⃣ Install Helm
We deploy Harbor with Helm, so install it first; version 3.10.x or later is required.

```shell
# Install Helm via a sealos cluster image, then verify the version
sealos run registry.cn-shanghai.aliyuncs.com/labring/helm:v3.10.0
helm version
```
2️⃣ Install Local Path Storage
To keep data from being lost when Pods restart, install a dynamic Local Path provisioner. In production you can use cloud storage such as Alibaba Cloud NAS instead.

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: local-path-storage
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: local-path-provisioner-service-account
  namespace: local-path-storage
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: local-path-provisioner-role
  namespace: local-path-storage
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch", "create", "patch", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: local-path-provisioner-role
rules:
  - apiGroups: [""]
    resources: ["nodes", "persistentvolumeclaims", "configmaps", "pods", "pods/log"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "patch", "update", "delete"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "patch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: local-path-provisioner-bind
  namespace: local-path-storage
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: local-path-provisioner-role
subjects:
  - kind: ServiceAccount
    name: local-path-provisioner-service-account
    namespace: local-path-storage
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: local-path-provisioner-bind
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: local-path-provisioner-role
subjects:
  - kind: ServiceAccount
    name: local-path-provisioner-service-account
    namespace: local-path-storage
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: local-path-provisioner
  namespace: local-path-storage
spec:
  replicas: 1
  selector:
    matchLabels:
      app: local-path-provisioner
  template:
    metadata:
      labels:
        app: local-path-provisioner
    spec:
      serviceAccountName: local-path-provisioner-service-account
      containers:
        - name: local-path-provisioner
          image: registry.cn-hangzhou.aliyuncs.com/lusyoe/local-path-provisioner:v0.0.32
          imagePullPolicy: IfNotPresent
          command:
            - local-path-provisioner
            - --debug
            - start
            - --config
            - /etc/config/config.json
          volumeMounts:
            - name: config-volume
              mountPath: /etc/config/
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: CONFIG_MOUNT_PATH
              value: /etc/config/
      volumes:
        - name: config-volume
          configMap:
            name: local-path-config
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path
provisioner: rancher.io/local-path
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: local-path-config
  namespace: local-path-storage
data:
  config.json: |-
    {
      "nodePathMap":[
        {
          "node":"DEFAULT_PATH_FOR_NON_LISTED_NODES",
          "paths":["/opt/local-path-provisioner"]
        }
      ]
    }
  setup: |-
    #!/bin/sh
    set -eu
    mkdir -m 0777 -p "$VOL_DIR"
  teardown: |-
    #!/bin/sh
    set -eu
    rm -rf "$VOL_DIR"
  helperPod.yaml: |-
    apiVersion: v1
    kind: Pod
    metadata:
      name: helper-pod
    spec:
      priorityClassName: system-node-critical
      tolerations:
        - key: node.kubernetes.io/disk-pressure
          operator: Exists
          effect: NoSchedule
      containers:
        - name: helper-pod
          image: registry.cn-hangzhou.aliyuncs.com/lusyoe/busybox:latest
          imagePullPolicy: IfNotPresent
```

Save this manifest to a file and apply it with `kubectl apply -f`.
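To confirm the provisioner works before installing Harbor, you can create a throwaway PVC; the manifest below is a sketch (the claim name is ours, not part of the deployment). Because the StorageClass uses `WaitForFirstConsumer`, the claim stays `Pending` until a Pod actually mounts it — that is expected.

```yaml
# Hypothetical smoke-test claim; delete it after checking.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-path-test
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 1Gi
```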
3️⃣ 安装 Ingress Controller
这里我们通过 Ingress 来暴露访问 Habror,因此需要再安装一个 Nginx Ingress Controller。
1 2 3 4 5 6
| sealos run registry.cn-shanghai.aliyuncs.com/labring/ingress-nginx:v1.11.3
kubectl edit svc -n ingress-nginx ingress-nginx-controller
|
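`kubectl edit` opens an interactive editor; the same change can be scripted. The sketch below assumes you want the common bare-metal setup of switching the Service to `NodePort` — adjust if your cluster has an external load balancer.

```shell
# Non-interactive alternative to `kubectl edit`: expose the controller
# on every node's IP via NodePort (assumption: no cloud LoadBalancer).
kubectl patch svc -n ingress-nginx ingress-nginx-controller \
  -p '{"spec": {"type": "NodePort"}}'

# Confirm the controller pod is running and note the assigned node ports
kubectl get pods -n ingress-nginx
kubectl get svc -n ingress-nginx ingress-nginx-controller
```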
4️⃣ Download the Harbor Helm Chart

```shell
# Add the official Harbor chart repository and fetch the chart locally
helm repo add harbor https://helm.goharbor.io
helm repo update

helm pull harbor/harbor --untar
```
5️⃣ Adjust the Helm Values

```shell
cd harbor

cp values.yaml harbor-values.yaml
```

Edit `harbor-values.yaml` and change the following settings (TLS disabled, a nip.io hostname, `local-path` storage, and Trivy off):

```yaml
expose:
  tls:
    enabled: false
    certSource: none
  ingress:
    hosts:
      core: harbor.192.168.10.3.nip.io
    annotations:

externalURL: http://harbor.192.168.10.3.nip.io

persistence:
  persistentVolumeClaim:
    registry:
      storageClass: local-path
    jobservice:
      storageClass: local-path
    database:
      storageClass: local-path
    redis:
      storageClass: local-path

trivy:
  enabled: false
```
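Before installing, you can render the chart locally to confirm the overrides took effect — `helm template` produces the manifests without touching the cluster. This is an optional sanity check, run from the chart directory:

```shell
# Render the chart with our values and spot-check the overrides
# (no cluster changes are made by `helm template`)
helm template harbor . -f harbor-values.yaml \
  | grep -E "storageClassName|harbor\.192\.168\.10\.3\.nip\.io" \
  | sort -u
```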
6️⃣ Deploy Harbor

```shell
# Install the chart from the current directory with our custom values
helm install harbor . -f harbor-values.yaml

# Watch the pods come up
kubectl get pods
```
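Harbor's components (core, portal, registry, database, redis, jobservice) can take a few minutes to pull and start. Rather than polling `kubectl get pods`, you can block until they are ready; the label selector below is an assumption — verify it against `kubectl get pods --show-labels` for your chart version.

```shell
# Wait until every pod of the release is Ready, or fail after 10 minutes.
# Assumption: the chart labels its pods with `release=harbor`.
kubectl wait pod --for=condition=Ready --timeout=600s -l release=harbor
```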
7️⃣ 修改 Docker 配置
最后别忘了修改 Docker 配置,否则无法将镜像推送到 http 的 harbor 仓库。
1 2 3 4 5 6 7
| vim /etc/docker/daemon.json
{ "insecure-registries": [ "harbor.192.168.10.3.nip.io" ] }
|
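The change only takes effect after the daemon restarts, and a syntax error in `daemon.json` will stop `dockerd` from starting at all, so it is worth validating the file first:

```shell
# Validate the JSON before restarting (a broken daemon.json prevents startup)
python3 -m json.tool /etc/docker/daemon.json

# Restart the daemon so the insecure-registries entry takes effect
systemctl restart docker

# Verify: the hostname should appear under "Insecure Registries"
docker info | grep -A2 "Insecure Registries"
```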
8️⃣ Access Harbor and Push an Image

```shell
# Web UI: http://harbor.192.168.10.3.nip.io
# Default credentials: admin / Harbor12345

# Log in, then tag and push a test image
docker login harbor.192.168.10.3.nip.io
docker tag busybox:latest harbor.192.168.10.3.nip.io/library/busybox:latest
docker push harbor.192.168.10.3.nip.io/library/busybox:latest
```
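Besides checking the web UI, you can confirm the push landed through Harbor's v2.0 REST API; a minimal check against the default `library` project, using the default admin credentials from above:

```shell
# List repositories in the 'library' project; the pushed busybox
# repository should appear in the JSON response.
curl -s -u admin:Harbor12345 \
  "http://harbor.192.168.10.3.nip.io/api/v2.0/projects/library/repositories"
```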
9️⃣ Upgrade or Uninstall Harbor (Optional)

```shell
# Apply changed values or a newer chart version
helm upgrade harbor . -f harbor-values.yaml

# Remove the release
helm uninstall harbor
```
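Every `helm upgrade` records a new revision, which is what makes the one-command rollback from the comparison table possible. Also note that PVCs created through StatefulSet volume claim templates (database, redis) are not removed by `helm uninstall`; delete them explicitly if you also want the stored data gone. The label selector is an assumption — check `kubectl get pvc` first.

```shell
# Inspect the release history and roll back if an upgrade misbehaves
helm history harbor
helm rollback harbor 1    # revision 1 = the initial install

# After uninstalling: remove leftover PVCs (destructive! data is lost).
# Assumption: the chart labels its PVCs with `release=harbor`.
kubectl delete pvc -l release=harbor
```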
✅ Summary
In this article we deployed the enterprise-grade Harbor registry on a Kubernetes cluster and examined the key differences from a Docker Compose deployment.
The main benefits of moving to Kubernetes include:
- High availability and self-healing
- Declarative configuration and version control
- Seamless integration with DevOps tooling (Argo CD / Jenkins / GitLab CI)
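As a concrete illustration of the GitOps integration, a Harbor deployment like this one can be managed by an Argo CD `Application` that syncs the chart and values from Git. The manifest below is only a sketch: the repository URL, path, and target namespace are placeholders for your environment, not values from this article.

```yaml
# Hypothetical Argo CD Application keeping Harbor in sync with Git.
# repoURL and path are placeholders; harbor-values.yaml is the file
# customized in step 5.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: harbor
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.com/infra/harbor-deploy.git  # placeholder
    targetRevision: main
    path: harbor
    helm:
      valueFiles:
        - harbor-values.yaml
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```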