First time setting up k8s: kubeadm init fails with this error. Can anyone take a look? What service is port 6443 for?

2023-03-02 18:47:23 +08:00
 proxytoworld

Running on Ubuntu 20.04, IP 172.31.64.157, following a tutorial.



root@ubuntu:~# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"26", GitVersion:"v1.26.1", GitCommit:"8f94681cd294aa8cfd3407b8191f6c70214973a4", GitTreeState:"clean", BuildDate:"2023-01-18T15:56:50Z", GoVersion:"go1.19.5", Compiler:"gc", Platform:"linux/amd64"}


kubeadm init   --pod-network-cidr=10.244.0.0/16   --cri-socket=unix:/run/containerd/containerd.sock   --image-repository=registry.aliyuncs.com/google_containers  --apiserver-advertise-address=172.31.64.157 --v=5


[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
I0302 10:41:01.843554   26058 manifests.go:99] [control-plane] getting StaticPodSpecs
I0302 10:41:01.843797   26058 certs.go:519] validating certificate period for CA certificate
I0302 10:41:01.843883   26058 manifests.go:125] [control-plane] adding volume "ca-certs" for component "kube-apiserver"
I0302 10:41:01.843892   26058 manifests.go:125] [control-plane] adding volume "etc-ca-certificates" for component "kube-apiserver"
I0302 10:41:01.843899   26058 manifests.go:125] [control-plane] adding volume "etc-pki" for component "kube-apiserver"
I0302 10:41:01.843905   26058 manifests.go:125] [control-plane] adding volume "k8s-certs" for component "kube-apiserver"
I0302 10:41:01.843912   26058 manifests.go:125] [control-plane] adding volume "usr-local-share-ca-certificates" for component "kube-apiserver"
I0302 10:41:01.843918   26058 manifests.go:125] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-apiserver"
I0302 10:41:01.846863   26058 manifests.go:154] [control-plane] wrote static Pod manifest for component "kube-apiserver" to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
I0302 10:41:01.846894   26058 manifests.go:99] [control-plane] getting StaticPodSpecs
I0302 10:41:01.847150   26058 manifests.go:125] [control-plane] adding volume "ca-certs" for component "kube-controller-manager"
I0302 10:41:01.847167   26058 manifests.go:125] [control-plane] adding volume "etc-ca-certificates" for component "kube-controller-manager"
I0302 10:41:01.847189   26058 manifests.go:125] [control-plane] adding volume "etc-pki" for component "kube-controller-manager"
I0302 10:41:01.847216   26058 manifests.go:125] [control-plane] adding volume "flexvolume-dir" for component "kube-controller-manager"
I0302 10:41:01.847235   26058 manifests.go:125] [control-plane] adding volume "k8s-certs" for component "kube-controller-manager"
I0302 10:41:01.847247   26058 manifests.go:125] [control-plane] adding volume "kubeconfig" for component "kube-controller-manager"
I0302 10:41:01.847258   26058 manifests.go:125] [control-plane] adding volume "usr-local-share-ca-certificates" for component "kube-controller-manager"
I0302 10:41:01.847270   26058 manifests.go:125] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-controller-manager"
I0302 10:41:01.848363   26058 manifests.go:154] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[control-plane] Creating static Pod manifest for "kube-scheduler"
I0302 10:41:01.851515   26058 manifests.go:99] [control-plane] getting StaticPodSpecs
I0302 10:41:01.851928   26058 manifests.go:125] [control-plane] adding volume "kubeconfig" for component "kube-scheduler"
I0302 10:41:01.854200   26058 manifests.go:154] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0302 10:41:01.854985   26058 local.go:65] [etcd] wrote Static Pod manifest for a local etcd member to "/etc/kubernetes/manifests/etcd.yaml"
I0302 10:41:01.855019   26058 waitcontrolplane.go:83] [wait-control-plane] Waiting for the API server to be healthy
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0302 10:41:02.858910   26058 with_retry.go:234] Got a Retry-After 1s response for attempt 1 to https://172.31.64.157:6443/healthz?timeout=10s
I0302 10:41:03.859894   26058 with_retry.go:234] Got a Retry-After 1s response for attempt 2 to https://172.31.64.157:6443/healthz?timeout=10s
I0302 10:41:04.861700   26058 with_retry.go:234] Got a Retry-After 1s response for attempt 3 to https://172.31.64.157:6443/healthz?timeout=10s
I0302 10:41:05.862985   26058 with_retry.go:234] Got a Retry-After 1s response for attempt 4 to https://172.31.64.157:6443/healthz?timeout=10s
I0302 10:41:06.864531   26058 with_retry.go:234] Got a Retry-After 1s response for attempt 5 to https://172.31.64.157:6443/healthz?timeout=10s

kubelet service status:


root@ubuntu:~# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
     Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
    Drop-In: /etc/systemd/system/kubelet.service.d
             └─10-kubeadm.conf
     Active: active (running) since Thu 2023-03-02 10:41:01 UTC; 4min 43s ago
       Docs: https://kubernetes.io/docs/home/
   Main PID: 26191 (kubelet)
      Tasks: 14 (limit: 19066)
     Memory: 29.1M
        CPU: 2.794s
     CGroup: /system.slice/kubelet.service
             └─26191 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoin>

Mar 02 10:45:45 ubuntu kubelet[26191]: E0302 10:45:45.207948   26191 kubelet.go:2448] "Error getting node" err="node \"ubuntu\" not found"
Mar 02 10:45:45 ubuntu kubelet[26191]: E0302 10:45:45.308259   26191 kubelet.go:2448] "Error getting node" err="node \"ubuntu\" not found"
Mar 02 10:45:45 ubuntu kubelet[26191]: E0302 10:45:45.380894   26191 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = failed to get sandbox im>
Mar 02 10:45:45 ubuntu kubelet[26191]: E0302 10:45:45.380958   26191 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = DeadlineExceeded desc = failed to get sandbox image \>
Mar 02 10:45:45 ubuntu kubelet[26191]: E0302 10:45:45.380987   26191 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = DeadlineExceeded desc = failed to get sandbox image \>
Mar 02 10:45:45 ubuntu kubelet[26191]: E0302 10:45:45.381042   26191 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-controller-manager-ubuntu_kube-system(f7c>
Mar 02 10:45:45 ubuntu kubelet[26191]: E0302 10:45:45.409374   26191 kubelet.go:2448] "Error getting node" err="node \"ubuntu\" not found"
Mar 02 10:45:45 ubuntu kubelet[26191]: E0302 10:45:45.510227   26191 kubelet.go:2448] "Error getting node" err="node \"ubuntu\" not found"
Mar 02 10:45:45 ubuntu kubelet[26191]: E0302 10:45:45.611099   26191 kubelet.go:2448] "Error getting node" err="node \"ubuntu\" not found"
Mar 02 10:45:45 ubuntu kubelet[26191]: E0302 10:45:45.711709   26191 kubelet.go:2448] "Error getting node" err="node \"ubuntu\" not found"

Cola98
2023-03-02 19:02:57 +08:00
err="node \"ubuntu\" not found 提示节点没有找到,之前 DNS 哪些步骤做了没得? 6443 是 api server 的端口
NoirStrike
2023-03-02 19:09:59 +08:00
Looks like the kubelet never brought up the api-server and the other control-plane static pods.
chenk008
2023-03-02 19:10:07 +08:00
Check whether the apiserver is actually running:
crictl ps -a
seers
2023-03-02 19:11:45 +08:00
Pull those images in advance; because of the GFW the sandbox pause image can't be pulled.
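The kubelet errors above ("failed to get sandbox image") suggest containerd is still trying to pull the default pause image from the upstream registry. A minimal sketch of repointing it at a domestic mirror; the config path is containerd's default, and the pause tag (3.9 for v1.26) is an assumption worth confirming with kubeadm config images list:

containerd config default > /etc/containerd/config.toml   # only if the file does not exist yet

# in /etc/containerd/config.toml, under [plugins."io.containerd.grpc.v1.cri"], set:
#   sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"

systemctl restart containerd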
seers
2023-03-02 19:12:51 +08:00
kubeadm init --image-repository='registry.cn-hangzhou.aliyuncs.com/google_containers'
In other words, point it at a domestic registry mirror.
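To pre-pull everything through the mirror before retrying init, something like this should work (the flags mirror the ones already used in the init command above):

kubeadm config images list --image-repository=registry.aliyuncs.com/google_containers
kubeadm config images pull --image-repository=registry.aliyuncs.com/google_containers \
  --cri-socket=unix:///run/containerd/containerd.sock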
mmm159357456
2023-03-02 19:12:59 +08:00
My feeling is that this is a hostname problem.
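If the hostname theory is worth ruling out, a quick sketch (the IP and hostname below are taken from the logs above; adjust if they differ):

hostnamectl                                  # confirm the node's hostname really is "ubuntu"
grep ubuntu /etc/hosts                       # check whether it resolves locally
echo "172.31.64.157 ubuntu" >> /etc/hosts    # add a mapping if it is missing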
proxytoworld
2023-03-02 19:16:57 +08:00
@seers I already pulled the images with docker pull, but I don't know how to get them running.
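One caveat here: since init is using the containerd CRI socket, images pulled with docker pull sit in Docker's own image store and the kubelet will not see them. A rough sketch of moving one across (the image name and tag are just an example):

docker save registry.aliyuncs.com/google_containers/pause:3.9 -o pause.tar
ctr -n k8s.io images import pause.tar    # k8s.io is the containerd namespace the kubelet reads from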
proxytoworld
2023-03-02 19:17:24 +08:00
@NoirStrike How do I bring them up and run them manually?
proxytoworld
2023-03-02 19:19:57 +08:00
@chenk008 It errors out; looks like it's not running:
root@ubuntu:~# crictl ps -a
WARN[0000] runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead.
WARN[0000] image connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead.
E0302 11:18:46.827213 29642 remote_runtime.go:390] "ListContainers with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/dockershim.sock: connect: no such file or directory\"" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
FATA[0000] listing containers: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial unix /var/run/dockershim.sock: connect: no such file or directory"
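The crictl failure above is only a client-side configuration issue: crictl is falling back to the deprecated dockershim socket. A sketch of pointing it at containerd explicitly (both forms are equivalent):

crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a

# or persist the endpoint so plain "crictl ps -a" works:
cat <<EOF > /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
EOF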
proxytoworld
2023-03-02 19:23:13 +08:00
@Cola98 I checked with lsof and nothing is listening on 6443, so the api-server probably isn't up, but I don't know how to start it manually.
proxytoworld
2023-03-02 19:25:50 +08:00
@seers I used the command you suggested and got the output below. Looks like it's still a kubelet problem.
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.


Unfortunately, an error has occurred:
timed out waiting for the condition

This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
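Two checks that usually narrow this down on Ubuntu 20.04 with containerd and v1.26 (a sketch, assuming containerd still has its default config where SystemdCgroup is false):

journalctl -xeu kubelet --no-pager | tail -n 50   # the real failure reason is usually in here

# in /etc/containerd/config.toml, under
# [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options], set:
#   SystemdCgroup = true
systemctl restart containerd && systemctl restart kubelet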
NoirStrike
2023-03-02 20:08:08 +08:00
@proxytoworld #8 Sort out the kubelet problem first.
Cola98
2023-03-02 21:43:11 +08:00
@proxytoworld If all else fails, configure a proxy for docker first. dockerd: https://docs.docker.com/engine/reference/commandline/dockerd/
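The proxy the Docker docs describe is set through a systemd drop-in. A sketch with a placeholder proxy address; the same drop-in pattern works for containerd.service, which is the runtime this cluster actually uses:

mkdir -p /etc/systemd/system/docker.service.d
cat <<EOF > /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://127.0.0.1:7890"
Environment="HTTPS_PROXY=http://127.0.0.1:7890"
EOF
systemctl daemon-reload && systemctl restart docker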
Cola98
2023-03-02 21:44:02 +08:00
@proxytoworld I vaguely recall the api-server can be started with systemctl, but I've forgotten; better look that up.
leaflxh
2023-03-03 10:23:09 +08:00
Try kubeadm reset, then init again with the image repository specified.
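A sketch of that reset-and-retry loop, reusing the flags from the original init command:

kubeadm reset -f
rm -rf /etc/cni/net.d ~/.kube/config        # optional cleanup of leftover state
kubeadm init --pod-network-cidr=10.244.0.0/16 \
  --cri-socket=unix:///run/containerd/containerd.sock \
  --image-repository=registry.aliyuncs.com/google_containers \
  --apiserver-advertise-address=172.31.64.157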
leaflxh
2023-03-03 10:25:18 +08:00
(The first time I set one up, I also couldn't reach the api-server after going through all the troubleshooting. It turned out none of the k8s.gcr.io images had been pulled. In the end I used k3s, which comes with a one-line install script.)
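For reference, the one-line k3s installer mentioned here is the upstream script:

curl -sfL https://get.k3s.io | sh -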
proxytoworld
2023-03-03 11:02:31 +08:00
@Cola98 After docker pull-ing the required images, how do I start them? Do I need any extra command-line flags?
Cola98
2023-03-03 13:52:51 +08:00
Just reinstall it and you'll be fine.
LaoLeyuan
2023-03-03 14:15:51 +08:00
I recommend https://github.com/lework/kainstall. It cuts the k8s setup steps down a lot; best used together with the author's offline files.
