A few months ago I was building a system that involved programmatically launching and monitoring jobs on Kubernetes. Essentially, I was writing a controller-type service that would accept requests from an API and schedule jobs on a Kubernetes cluster. I was using Go, and Go has one mature client API for this, called client-go. In this post I simply document the process of creating K8s jobs using client-go, so that anyone dealing with the same problem might find it helpful.
Prerequisites:
- A working Kubernetes setup. If you don't have a multi-node cluster, you can set up a local Kubernetes environment using minikube or microk8s.
- Basic knowledge of Go.
- A working Go module directory.
Installing the client-go package:
The client-go repository contains several packages, including:
- kubectl
- kubernetes (the clientset)
- discovery
- dynamic
- plugin/pkg/client/auth
- transport
- tools/cache
To install client-go as a module, you need a recent Go version that supports the module system. If you want to learn more about Go modules, refer to this tutorial. Let's create a simple Go module called "github.com/k8sjobs":
mkdir k8sjobs
cd k8sjobs && go mod init github.com/k8sjobs
This creates a go.mod file; let's check its contents:
cat k8sjobs/go.mod
Output:
module github.com/k8sjobs
go 1.15
Now, inside the k8sjobs directory, install client-go:
go get k8s.io/client-go@v0.20.2
The go.mod file should now look like this:
module github.com/k8sjobs
go 1.15
require k8s.io/client-go v0.20.2 // indirect
As you can see, go.mod now lists client-go (k8s.io/client-go) as a dependency.
What will we build?
We'll start with a main.go that reads command-line arguments using the flag package:
package main

import (
    "flag"
    "fmt"
)

func main() {
    jobName := flag.String("jobname", "test-job", "The name of the job")
    containerImage := flag.String("image", "ubuntu:latest", "Name of the container image")
    entryCommand := flag.String("command", "ls", "The command to run inside the container")
    flag.Parse()

    fmt.Printf("Args : %s %s %s\n", *jobName, *containerImage, *entryCommand)
}
In main we parse three flags: the job name, the container image name and the command. These values will later be passed to the client-go calls that create the job.
1. Connecting to the cluster:
To build the client configuration we need the kubeconfig file, which usually lives at $HOME/.kube/config. We'll write a function called connectToK8s that reads this config and connects to the cluster.
Our imports so far:
import (
    "flag"
    "log"
    "os"
    "path/filepath"

    "k8s.io/client-go/kubernetes"
    clientcmd "k8s.io/client-go/tools/clientcmd"
)
connectToK8s returns a *kubernetes.Clientset:
func connectToK8s() *kubernetes.Clientset {
    home, exists := os.LookupEnv("HOME")
    if !exists {
        home = "/root"
    }

    // Build the client config from the kubeconfig file at $HOME/.kube/config.
    configPath := filepath.Join(home, ".kube", "config")
    config, err := clientcmd.BuildConfigFromFlags("", configPath)
    if err != nil {
        log.Panicln("failed to create K8s config")
    }

    clientset, err := kubernetes.NewForConfig(config)
    if err != nil {
        log.Panicln("Failed to create K8s clientset")
    }

    return clientset
}
Call it from main:
clientset := connectToK8s()
Next, we'll write a function called launchK8sJob with the following signature:
func launchK8sJob(clientset *kubernetes.Clientset, jobName *string, image *string, cmd *string)
But before we implement it, let's review what a K8s Job spec looks like:
apiVersion: batch/v1
kind: Job
metadata:
  name: ls-job
spec:
  template:
    spec:
      containers:
      - name: ls-job
        image: ubuntu:latest
        command: ["ls", "-aRil"]
      restartPolicy: Never
  backoffLimit: 4
The Job resource belongs to the batch/v1 API group. Through the clientset we access it as clientset.BatchV1(), and we build the Job object itself from the batchv1 and core v1 types.
Here are the imports so far:
import (
    "context"
    "flag"
    "log"
    "os"
    "path/filepath"
    "strings"

    batchv1 "k8s.io/api/batch/v1"
    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    kubernetes "k8s.io/client-go/kubernetes"
    clientcmd "k8s.io/client-go/tools/clientcmd"
)
Now the launchK8sJob function:
func launchK8sJob(clientset *kubernetes.Clientset, jobName *string, image *string, cmd *string) {
    jobs := clientset.BatchV1().Jobs("default")
    var backOffLimit int32 = 0

    jobSpec := &batchv1.Job{
        ObjectMeta: metav1.ObjectMeta{
            Name:      *jobName,
            Namespace: "default",
        },
        Spec: batchv1.JobSpec{
            Template: v1.PodTemplateSpec{
                Spec: v1.PodSpec{
                    Containers: []v1.Container{
                        {
                            Name:    *jobName,
                            Image:   *image,
                            Command: strings.Split(*cmd, " "),
                        },
                    },
                    RestartPolicy: v1.RestartPolicyNever,
                },
            },
            BackoffLimit: &backOffLimit,
        },
    }

    _, err := jobs.Create(context.TODO(), jobSpec, metav1.CreateOptions{})
    if err != nil {
        log.Fatalln("Failed to create K8s job:", err)
    }

    // print job details
    log.Println("Created K8s job successfully")
}
Putting it all together, here is the complete main.go:
package main

import (
    "context"
    "flag"
    "log"
    "os"
    "path/filepath"
    "strings"

    batchv1 "k8s.io/api/batch/v1"
    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    kubernetes "k8s.io/client-go/kubernetes"
    clientcmd "k8s.io/client-go/tools/clientcmd"
)

func connectToK8s() *kubernetes.Clientset {
    home, exists := os.LookupEnv("HOME")
    if !exists {
        home = "/root"
    }

    configPath := filepath.Join(home, ".kube", "config")
    config, err := clientcmd.BuildConfigFromFlags("", configPath)
    if err != nil {
        log.Fatalln("failed to create K8s config")
    }

    clientset, err := kubernetes.NewForConfig(config)
    if err != nil {
        log.Fatalln("Failed to create K8s clientset")
    }

    return clientset
}

func launchK8sJob(clientset *kubernetes.Clientset, jobName *string, image *string, cmd *string) {
    jobs := clientset.BatchV1().Jobs("default")
    var backOffLimit int32 = 0

    jobSpec := &batchv1.Job{
        ObjectMeta: metav1.ObjectMeta{
            Name:      *jobName,
            Namespace: "default",
        },
        Spec: batchv1.JobSpec{
            Template: v1.PodTemplateSpec{
                Spec: v1.PodSpec{
                    Containers: []v1.Container{
                        {
                            Name:    *jobName,
                            Image:   *image,
                            Command: strings.Split(*cmd, " "),
                        },
                    },
                    RestartPolicy: v1.RestartPolicyNever,
                },
            },
            BackoffLimit: &backOffLimit,
        },
    }

    _, err := jobs.Create(context.TODO(), jobSpec, metav1.CreateOptions{})
    if err != nil {
        log.Fatalln("Failed to create K8s job:", err)
    }

    // print job details
    log.Println("Created K8s job successfully")
}

func main() {
    jobName := flag.String("jobname", "test-job", "The name of the job")
    containerImage := flag.String("image", "ubuntu:latest", "Name of the container image")
    entryCommand := flag.String("command", "ls", "The command to run inside the container")
    flag.Parse()

    clientset := connectToK8s()
    launchK8sJob(clientset, jobName, containerImage, entryCommand)
}
Note that the pod template sets RestartPolicy to Never and the Job's BackoffLimit to 0, so a failed container is neither restarted in place nor retried with a new pod; tune BackoffLimit and RestartPolicy depending on how much retrying you want.
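As a rough sketch (not part of the program above), this is how you might relax those settings, for example allowing up to three retries and letting the kubelet restart a failed container in place:

// Hypothetical variation: retry up to 3 times and restart failed containers.
var retries int32 = 3
jobSpec.Spec.BackoffLimit = &retries
jobSpec.Spec.Template.Spec.RestartPolicy = v1.RestartPolicyOnFailure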
To build the binary, set GOBIN to the k8sjobs directory and run go install:
export GOBIN=$PWD
go install
Now run the k8sjobs binary:
./k8sjobs --jobname=test --image=alpine:latest --command="ls -aRil"
If the cluster is configured correctly, the command should exit successfully and we should see output like this:
2021/02/13 13:39:49 Created K8s job successfully
Let's verify the job with kubectl. I'm using microk8s, so the command is microk8s kubectl; if you have a regular kubectl setup, just use kubectl instead.
microk8s kubectl get jobs
Output:
NAME COMPLETIONS DURATION AGE
test 1/1 73s 13m
The output shows that the job was created and completed successfully. Now let's describe the job to confirm it's the same one we created:
microk8s kubectl describe job test
Output:
Name:           test
Namespace:      default
Selector:       controller-uid=48b7b61d-4ead-4033-91cd-035f539c27bf
Labels:         controller-uid=48b7b61d-4ead-4033-91cd-035f539c27bf
                job-name=test
Annotations:    <none>
Parallelism:    1
Completions:    1
Start Time:     Sat, 13 Feb 2021 13:39:49 +0530
Completed At:   Sat, 13 Feb 2021 13:41:02 +0530
Duration:       73s
Pods Statuses:  0 Running / 1 Succeeded / 0 Failed
Pod Template:
  Labels:  controller-uid=48b7b61d-4ead-4033-91cd-035f539c27bf
           job-name=test
  Containers:
   test:
    Image:      alpine:latest
    Port:       <none>
    Host Port:  <none>
    Command:
      ls
      -aRil
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Events:
  Type    Reason            Age   From            Message
  ----    ------            ----  ----            -------
  Normal  SuccessfulCreate  15m   job-controller  Created pod: test-l9zwk
  Normal  Completed         14m   job-controller  Job completed
This is the description of the job we created. You can verify that the container image name and the command match the arguments we passed.
That's it! With client-go and a few dozen lines of Go, we can create Kubernetes jobs programmatically.
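Since the original goal also involved monitoring those jobs, here is a minimal polling sketch (not part of the program above; it reuses the imports from main.go plus "fmt" and "time", and the waitForJob name is my own) that checks the Job's status until it succeeds or fails:

// waitForJob polls the Job in the default namespace until it reports
// at least one succeeded or failed pod.
func waitForJob(clientset *kubernetes.Clientset, jobName string) error {
    for {
        job, err := clientset.BatchV1().Jobs("default").Get(context.TODO(), jobName, metav1.GetOptions{})
        if err != nil {
            return err
        }
        if job.Status.Succeeded > 0 {
            log.Println("Job completed successfully")
            return nil
        }
        if job.Status.Failed > 0 {
            return fmt.Errorf("job %s failed", jobName)
        }
        time.Sleep(2 * time.Second)
    }
}

In a real controller you would typically use a watch or an informer from tools/cache instead of polling, but a loop like this is enough to confirm that a job finished.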