Operator Development

Differences between Kubebuilder and Operator SDK

There’s not a huge difference between the Go projects that kubebuilder and operator-sdk scaffold. Both use controller-tools and controller-runtime, and both scaffold substantially similar Go package structures.

Where they differ is:

  • Operator SDK also supports Ansible- and Helm-based operators, which make it easy to write an operator without learning Go, particularly if you already have experience with Ansible or Helm.
  • Operator SDK includes integrations with the Operator Lifecycle Manager (OLM), which is a key component of the Operator Framework that is important to Day 2 cluster operations, like managing a live upgrade of your operator.
  • Operator SDK includes a scorecard subcommand that helps you understand if your operator follows best practices.
  • Operator SDK includes an e2e testing framework that simplifies testing your operator against an actual cluster.
  • Kubebuilder includes an envtest package that allows operator developers to run simple tests against a standalone etcd and apiserver (see the minimal sketch after this list).
  • Kubebuilder scaffolds a Makefile to assist users in operator tasks (build, test, run, code generation, etc.); Operator SDK is currently using built-in subcommands. Each has pros and cons. The SDK team will likely be migrating to a Makefile-based approach in the future.
  • Kubebuilder uses Kustomize to build deployment manifests; Operator SDK uses static files with placeholders.
  • Kubebuilder has recently improved its support for admission and CRD conversion webhooks, which has not yet made it into SDK.
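
A minimal envtest sketch, assuming controller-runtime v0.10.x with the envtest binaries made available via the KUBEBUILDER_ASSETS environment variable:

package controllers

import (
	"testing"

	"k8s.io/client-go/kubernetes/scheme"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/envtest"
)

func TestEnv(t *testing.T) {
	// Start a standalone etcd and kube-apiserver (no kubelet, no nodes).
	testEnv := &envtest.Environment{}
	cfg, err := testEnv.Start()
	if err != nil {
		t.Fatalf("starting envtest: %v", err)
	}
	defer testEnv.Stop()

	// Talk to the test apiserver with an ordinary controller-runtime client.
	k8sClient, err := client.New(cfg, client.Options{Scheme: scheme.Scheme})
	if err != nil {
		t.Fatalf("creating client: %v", err)
	}
	_ = k8sClient // create and list objects here in a real test
}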

The SDK and Kubebuilder teams work closely together, and we’re planning to increase our efforts to help the kubebuilder team maintain controller-tools and controller-runtime so that the entire community has access to the latest features and bug fixes.

Kubebuilder

$ kubebuilder init --domain test.com
Writing kustomize manifests for you to edit...
Writing scaffold for you to edit...
Get controller runtime:
$ go get sigs.k8s.io/controller-runtime@v0.10.0
Update dependencies:
$ go mod tidy
Next: define a resource with:
$ kubebuilder create api
To let one project host APIs in multiple groups, switch on the multi-group layout:

kubebuilder edit --multigroup=true
.
├── Dockerfile
├── Makefile
├── PROJECT
├── config
│   ├── default
│   │   ├── kustomization.yaml
│   │   ├── manager_auth_proxy_patch.yaml
│   │   └── manager_config_patch.yaml
│   ├── manager
│   │   ├── controller_manager_config.yaml
│   │   ├── kustomization.yaml
│   │   └── manager.yaml
│   ├── prometheus
│   │   ├── kustomization.yaml
│   │   └── monitor.yaml
│   └── rbac
│       ├── auth_proxy_client_clusterrole.yaml
│       ├── auth_proxy_role.yaml
│       ├── auth_proxy_role_binding.yaml
│       ├── auth_proxy_service.yaml
│       ├── kustomization.yaml
│       ├── leader_election_role.yaml
│       ├── leader_election_role_binding.yaml
│       ├── role_binding.yaml
│       └── service_account.yaml
├── go.mod
├── go.sum
├── hack
│   └── boilerplate.go.txt
└── main.go
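
The heart of the scaffold is main.go, which wires up a controller-runtime manager. Trimmed to its essentials (flag parsing, metrics options, and error logging omitted), it looks roughly like this:

package main

import (
	"os"

	"k8s.io/apimachinery/pkg/runtime"
	clientgoscheme "k8s.io/client-go/kubernetes/scheme"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/log/zap"
)

var scheme = runtime.NewScheme()

func init() {
	// Register the built-in Kubernetes types; generated API groups get added here too.
	_ = clientgoscheme.AddToScheme(scheme)
}

func main() {
	ctrl.SetLogger(zap.New())

	// The manager owns the shared cache, clients, metrics endpoint, and leader election.
	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{Scheme: scheme})
	if err != nil {
		os.Exit(1)
	}

	// Controllers created by `kubebuilder create api` register themselves with mgr here.

	// Blocks until the process receives SIGTERM/SIGINT.
	if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
		os.Exit(1)
	}
}
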
$ kubebuilder create api --group devops --version v1 --kind Cluster
Create Resource [y/n]
y
Create Controller [y/n]
y
Writing kustomize manifests for you to edit...
Writing scaffold for you to edit...
api/v1/cluster_types.go
controllers/cluster_controller.go
Update dependencies:
$ go mod tidy
Running make:
$ make generate
go: creating new go.mod: module tmp
Downloading sigs.k8s.io/controller-tools/cmd/controller-gen@v0.7.0
go get: installing executables with 'go get' in module mode is deprecated.
To adjust and download dependencies of the current module, use 'go get -d'.
To install using requirements of the current module, use 'go install'.
To install ignoring the current module, use 'go install' with a version,
like 'go install example.com/cmd@latest'.
For more information, see https://golang.org/doc/go-get-install-deprecation
or run 'go help get' or 'go help install'.
go get: added github.com/fatih/color v1.12.0
go get: added github.com/go-logr/logr v0.4.0
go get: added github.com/gobuffalo/flect v0.2.3
go get: added github.com/gogo/protobuf v1.3.2
go get: added github.com/google/go-cmp v0.5.6
go get: added github.com/google/gofuzz v1.1.0
go get: added github.com/inconshreveable/mousetrap v1.0.0
go get: added github.com/json-iterator/go v1.1.11
go get: added github.com/mattn/go-colorable v0.1.8
go get: added github.com/mattn/go-isatty v0.0.12
go get: added github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd
go get: added github.com/modern-go/reflect2 v1.0.1
go get: added github.com/spf13/cobra v1.2.1
go get: added github.com/spf13/pflag v1.0.5
go get: added golang.org/x/mod v0.4.2
go get: added golang.org/x/net v0.0.0-20210520170846-37e1c6afe023
go get: added golang.org/x/sys v0.0.0-20210616094352-59db8d763f22
go get: added golang.org/x/text v0.3.6
go get: added golang.org/x/tools v0.1.5
go get: added golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1
go get: added gopkg.in/inf.v0 v0.9.1
go get: added gopkg.in/yaml.v2 v2.4.0
go get: added gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b
go get: added k8s.io/api v0.22.2
go get: added k8s.io/apiextensions-apiserver v0.22.2
go get: added k8s.io/apimachinery v0.22.2
go get: added k8s.io/klog/v2 v2.9.0
go get: added k8s.io/utils v0.0.0-20210819203725-bdf08cb9a70a
go get: added sigs.k8s.io/controller-tools v0.7.0
go get: added sigs.k8s.io/structured-merge-diff/v4 v4.1.2
go get: added sigs.k8s.io/yaml v1.2.0
/Users/flynn/go/src/test.com/cluster-manager/bin/controller-gen object:headerFile="hack/boilerplate.go.txt" paths="./..."
Next: implement your new API and generate the manifests (e.g. CRDs,CRs) with:
$ make manifests
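
make generate rebuilds the zz_generated.deepcopy.go helpers, and make manifests compiles the // +kubebuilder markers in the Go types into CRD YAML under config/crd. The freshly scaffolded api/v1/cluster_types.go is essentially the following; the SchemeBuilder it registers with lives in groupversion_info.go, and the Version field plus its validation marker are an illustrative addition (the scaffold starts with a Foo placeholder):

package v1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// ClusterSpec defines the desired state of Cluster.
type ClusterSpec struct {
	// Illustrative field; markers like this become OpenAPI validation in the CRD.
	// +kubebuilder:validation:MinLength=1
	Version string `json:"version,omitempty"`
}

// ClusterStatus defines the observed state of Cluster.
type ClusterStatus struct {
}

// +kubebuilder:object:root=true
// +kubebuilder:subresource:status

// Cluster is the Schema for the clusters API.
type Cluster struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   ClusterSpec   `json:"spec,omitempty"`
	Status ClusterStatus `json:"status,omitempty"`
}

// +kubebuilder:object:root=true

// ClusterList contains a list of Cluster.
type ClusterList struct {
	metav1.TypeMeta `json:",inline"`
	metav1.ListMeta `json:"metadata,omitempty"`
	Items           []Cluster `json:"items"`
}

func init() {
	SchemeBuilder.Register(&Cluster{}, &ClusterList{})
}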

Directory structure

.
├── Dockerfile
├── Makefile
├── PROJECT
├── api
│   └── v1
│       ├── cluster_types.go
│       ├── groupversion_info.go
│       └── zz_generated.deepcopy.go
├── bin
│   └── controller-gen
├── config
│   ├── crd
│   │   ├── kustomization.yaml
│   │   ├── kustomizeconfig.yaml
│   │   └── patches
│   │       ├── cainjection_in_clusters.yaml
│   │       └── webhook_in_clusters.yaml
│   ├── default
│   │   ├── kustomization.yaml
│   │   ├── manager_auth_proxy_patch.yaml
│   │   └── manager_config_patch.yaml
│   ├── manager
│   │   ├── controller_manager_config.yaml
│   │   ├── kustomization.yaml
│   │   └── manager.yaml
│   ├── prometheus
│   │   ├── kustomization.yaml
│   │   └── monitor.yaml
│   ├── rbac
│   │   ├── auth_proxy_client_clusterrole.yaml
│   │   ├── auth_proxy_role.yaml
│   │   ├── auth_proxy_role_binding.yaml
│   │   ├── auth_proxy_service.yaml
│   │   ├── cluster_editor_role.yaml
│   │   ├── cluster_viewer_role.yaml
│   │   ├── kustomization.yaml
│   │   ├── leader_election_role.yaml
│   │   ├── leader_election_role_binding.yaml
│   │   ├── role_binding.yaml
│   │   └── service_account.yaml
│   └── samples
│       └── devops_v1_cluster.yaml
├── controllers
│   ├── cluster_controller.go
│   └── suite_test.go
├── go.mod
├── go.sum
├── hack
│   └── boilerplate.go.txt
└── main.go
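
controllers/cluster_controller.go carries the reconcile loop plus the RBAC markers that make manifests compiles into config/rbac/role.yaml. Trimmed, the scaffold looks roughly like this; the Get call inside Reconcile is a typical first addition rather than scaffolded code, and the module path test.com/cluster-manager matches the init output above:

package controllers

import (
	"context"

	"k8s.io/apimachinery/pkg/runtime"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"

	devopsv1 "test.com/cluster-manager/api/v1"
)

// ClusterReconciler reconciles a Cluster object.
type ClusterReconciler struct {
	client.Client
	Scheme *runtime.Scheme
}

// +kubebuilder:rbac:groups=devops.test.com,resources=clusters,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=devops.test.com,resources=clusters/status,verbs=get;update;patch

func (r *ClusterReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	var cluster devopsv1.Cluster
	if err := r.Get(ctx, req.NamespacedName, &cluster); err != nil {
		// The object may have been deleted; ignore not-found errors.
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}

	// Drive the actual state toward cluster.Spec here.

	return ctrl.Result{}, nil
}

func (r *ClusterReconciler) SetupWithManager(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&devopsv1.Cluster{}).
		Complete(r)
}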

Install the CRD

make install renders the CRD manifests with kustomize (kustomize build config/crd) and applies them to the cluster targeted by the current kubeconfig:

make install

Define the Cluster YAML

apiVersion: devops.k8s.io/v1
kind: Cluster
metadata:
  name: cluster-01
  namespace: xiamen-area
spec:
  osType: centos                 # operating system type
  criType: containerd            # CRI type; currently containerd, docker support is deprecated
  version: v1.19.6               # kubernetes version
  eth: ens34                     # NIC name, defaults to eth0
  clusterCIDR: 172.16.101.0/24   # cluster pod CIDR
  serviceCIDR: 172.16.201.0/24   # cluster service CIDR
  localAPIEndpoint:
    advertiseAddress: 183.131.145.84
    bindPort: 6443
  network:
    ipvs: true                   # kube-proxy mode, supports ipvs and iptables
    internalLB: true
  enableMasterSchedule: true     # whether master nodes are schedulable
  ha:
    thirdParty:
      vip: "172.16.18.243"       # cluster apiserver VIP
      vport: 6443
  hooks:
    cniInstall: flannel          # cluster CNI plugin
  masterMachines:                # cluster master nodes
    - ip: 172.16.18.17
      port: 22
      username: root
      password: "123456"
    - ip: 172.16.18.18
      port: 22
      username: root
      password: "123456"
    - ip: 172.16.18.19
      port: 22
      username: root
      password: "123456"
  registry:                      # image registry mirrors
    mirrors:
      "docker.io":
        endpoints:
          - "https://yqdzw3p0.mirror.aliyuncs.com"
      "quay.io":
        endpoints:
          - "https://quay.mirrors.ustc.edu.cn"
  upgrade:
    mode: Manual                 # cluster upgrade mode, supports Auto and Manual
    strategy:                    # cluster upgrade strategy
      maxUnready: 0              # rolling upgrade of nodes still needs to be added
      drainNodeBeforeUpgrade: true  # drain each node before upgrading it
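
Once the CRD is installed, this manifest can be applied with kubectl apply -f. For the controller to decode it, ClusterSpec must declare fields whose json tags match the keys above; a hypothetical excerpt (field and type names are illustrative, see the kok-operator source for the real definitions):

// Hypothetical excerpt of ClusterSpec matching the YAML above; names are
// illustrative, not copied from kok-operator.
type ClusterMachine struct {
	IP       string `json:"ip,omitempty"`
	Port     int32  `json:"port,omitempty"`
	Username string `json:"username,omitempty"`
	Password string `json:"password,omitempty"`
}

type ClusterSpec struct {
	OSType         string           `json:"osType,omitempty"`
	CRIType        string           `json:"criType,omitempty"`
	Version        string           `json:"version,omitempty"`
	Eth            string           `json:"eth,omitempty"`
	ClusterCIDR    string           `json:"clusterCIDR,omitempty"`
	ServiceCIDR    string           `json:"serviceCIDR,omitempty"`
	MasterMachines []ClusterMachine `json:"masterMachines,omitempty"`
}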

Install the Operator

[root@master kok-operator]# helm upgrade --install kok-operator --create-namespace --namespace kok-system --debug ./charts/kok-operator
history.go:56: [debug] getting history for release kok-operator
Release "kok-operator" does not exist. Installing it now.
install.go:178: [debug] Original chart version: ""
install.go:199: [debug] CHART PATH: /root/kok-operator/charts/kok-operator

client.go:128: [debug] creating 1 resource(s)
client.go:128: [debug] creating 1 resource(s)
client.go:128: [debug] creating 1 resource(s)
client.go:128: [debug] creating 1 resource(s)
install.go:165: [debug] Clearing discovery cache
wait.go:48: [debug] beginning wait for 4 resources with timeout of 1m0s
client.go:128: [debug] creating 1 resource(s)
client.go:128: [debug] creating 5 resource(s)
NAME: kok-operator
LAST DEPLOYED: Sun Nov 21 21:38:08 2021
NAMESPACE: kok-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
USER-SUPPLIED VALUES:
{}

COMPUTED VALUES:
affinity: {}
args:
  imagesPrefix: docker.io/wtxue
fullnameOverride: ""
image:
  pullPolicy: Always
  repository: docker.io/wtxue/kok-operator
  tag: v0.2.0-dev2
imagePullSecrets: []
nameOverride: ""
nodeSelector: {}
podAnnotations: {}
podSecurityContext: {}
rbac:
  create: true
  name: ""
replicaCount: 1
resources: {}
securityContext: {}
service:
  port: 80
  type: ClusterIP
tolerations: []

HOOKS:
MANIFEST:
---
# Source: kok-operator/templates/rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kok-operator
  namespace: kok-system
  labels:
    helm.sh/chart: kok-operator-v0.2.0
    app.kubernetes.io/name: kok-operator
    app.kubernetes.io/instance: kok-operator
    app.kubernetes.io/version: "v0.2.0"
    app.kubernetes.io/managed-by: Helm
---
# Source: kok-operator/templates/rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kok-operator
  labels:
    helm.sh/chart: kok-operator-v0.2.0
    app.kubernetes.io/name: kok-operator
    app.kubernetes.io/instance: kok-operator
    app.kubernetes.io/version: "v0.2.0"
    app.kubernetes.io/managed-by: Helm
rules:
  - apiGroups:
      - ""
    resources: ["*"]
    verbs: ["*"]
  - apiGroups:
      - "apps"
      - "apiextensions.k8s.io"
      - "autoscaling"
    resources: ["*"]
    verbs: ["*"]
  - apiGroups: ["devops.fake.io","workload.fake.io"]
    resources: ["*"]
    verbs: ["*"]
---
# Source: kok-operator/templates/rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kok-operator
  labels:
    helm.sh/chart: kok-operator-v0.2.0
    app.kubernetes.io/name: kok-operator
    app.kubernetes.io/instance: kok-operator
    app.kubernetes.io/version: "v0.2.0"
    app.kubernetes.io/managed-by: Helm
subjects:
  - kind: ServiceAccount
    name: kok-operator
    namespace: kok-system
roleRef:
  kind: ClusterRole
  name: kok-operator
  apiGroup: rbac.authorization.k8s.io
---
# Source: kok-operator/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: kok-operator
  namespace: kok-system
  labels:
    helm.sh/chart: kok-operator-v0.2.0
    app.kubernetes.io/name: kok-operator
    app.kubernetes.io/instance: kok-operator
    app.kubernetes.io/version: "v0.2.0"
    app.kubernetes.io/managed-by: Helm
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: http
      protocol: TCP
      name: http
  selector:
    app.kubernetes.io/name: kok-operator
    app.kubernetes.io/instance: kok-operator
---
# Source: kok-operator/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kok-operator
  namespace: kok-system
  labels:
    helm.sh/chart: kok-operator-v0.2.0
    app.kubernetes.io/name: kok-operator
    app.kubernetes.io/instance: kok-operator
    app.kubernetes.io/version: "v0.2.0"
    app.kubernetes.io/managed-by: Helm
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: kok-operator
      app.kubernetes.io/instance: kok-operator
  template:
    metadata:
      labels:
        app.kubernetes.io/name: kok-operator
        app.kubernetes.io/instance: kok-operator
    spec:
      serviceAccountName: kok-operator
      securityContext:
        {}
      containers:
        - name: kok-operator
          securityContext:
            {}
          image: "docker.io/wtxue/kok-operator:v0.2.0-dev2"
          imagePullPolicy: Always
          command:
            - kok-operator
          args:
            - ctrl
            - -v
            - "4"
            - --images-prefix=docker.io/wtxue
          ports:
            - name: http
              containerPort: 8090
              protocol: TCP
          # livenessProbe:
          #   httpGet:
          #     path: /
          #     port: http
          # readinessProbe:
          #   httpGet:
          #     path: /
          #     port: http
          resources:
            {}

NOTES:
