
minikube start fails on Windows 11 #15904

Closed
kangdf opened this issue Feb 22, 2023 · 4 comments
Labels
l/zh-CN Issues in or relating to Chinese lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale.

Comments

@kangdf

kangdf commented Feb 22, 2023

Command needed to reproduce the problem: minikube start

Full output of the failed command:


minikube start

W0222 23:46:15.712583 8128 out.go:239]
W0222 23:46:15.713107 8128 out.go:239] 💣 开启 cluster 时出错: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.26.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
timed out waiting for the condition

This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'

stderr:
W0222 15:42:14.124510 13492 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

W0222 23:46:15.714159 8128 out.go:239]
W0222 23:46:15.714691 8128 out.go:239]
😿 If the above advice does not help, please let us know:
👉 https://github.com/kubernetes/minikube/issues/new/choose
Please run minikube logs --file=logs.txt and attach logs.txt to the GitHub issue.
I0222 23:46:15.716784 8128 out.go:177]
W0222 23:46:15.717308 8128 out.go:239] ❌ Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.26.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
timed out waiting for the condition

This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'

stderr:
W0222 15:42:14.124510 13492 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

W0222 23:46:15.717831 8128 out.go:239] 💡 Suggestion: check the output of 'journalctl -xeu kubelet', and try starting minikube again with the flag --extra-config=kubelet.cgroup-driver=systemd
W0222 23:46:15.717831 8128 out.go:239] 🍿 Related issue: #4172
I0222 23:46:15.718355 8128 out.go:177]

==> Docker <==
-- Logs begin at Wed 2023-02-22 15:33:44 UTC, end at Wed 2023-02-22 16:05:02 UTC. --
Feb 22 15:59:32 minikube dockerd[768]: time="2023-02-22T15:59:32.189687200Z" level=info msg="Attempting next endpoint for pull after error: Head "https://asia-east1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\": dial tcp 142.250.157.82:443: i/o timeout"
Feb 22 15:59:32 minikube dockerd[768]: time="2023-02-22T15:59:32.196851800Z" level=error msg="Handler for POST /v1.41/images/create returned error: Head "https://asia-east1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\": dial tcp 142.250.157.82:443: i/o timeout"
Feb 22 16:00:01 minikube dockerd[768]: time="2023-02-22T16:00:01.163310200Z" level=info msg="Attempting next endpoint for pull after error: Head "https://asia-east1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\": dial tcp 142.250.157.82:443: i/o timeout"
Feb 22 16:00:01 minikube dockerd[768]: time="2023-02-22T16:00:01.168046500Z" level=error msg="Handler for POST /v1.41/images/create returned error: Head "https://asia-east1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\": dial tcp 142.250.157.82:443: i/o timeout"
Feb 22 16:00:10 minikube dockerd[768]: time="2023-02-22T16:00:10.172096500Z" level=info msg="Attempting next endpoint for pull after error: Head "https://asia-east1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\": dial tcp 142.250.157.82:443: i/o timeout"
Feb 22 16:00:10 minikube dockerd[768]: time="2023-02-22T16:00:10.180094000Z" level=error msg="Handler for POST /v1.41/images/create returned error: Head "https://asia-east1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\": dial tcp 142.250.157.82:443: i/o timeout"
Feb 22 16:00:11 minikube dockerd[768]: time="2023-02-22T16:00:11.173339500Z" level=info msg="Attempting next endpoint for pull after error: Head "https://asia-east1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\": dial tcp 142.250.157.82:443: i/o timeout"
Feb 22 16:00:11 minikube dockerd[768]: time="2023-02-22T16:00:11.177842400Z" level=error msg="Handler for POST /v1.41/images/create returned error: Head "https://asia-east1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\": dial tcp 142.250.157.82:443: i/o timeout"
Feb 22 16:00:14 minikube dockerd[768]: time="2023-02-22T16:00:14.162931600Z" level=info msg="Attempting next endpoint for pull after error: Head "https://asia-east1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\": dial tcp 142.250.157.82:443: i/o timeout"
Feb 22 16:00:14 minikube dockerd[768]: time="2023-02-22T16:00:14.167772500Z" level=error msg="Handler for POST /v1.41/images/create returned error: Head "https://asia-east1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\": dial tcp 142.250.157.82:443: i/o timeout"
Feb 22 16:00:46 minikube dockerd[768]: time="2023-02-22T16:00:46.174162600Z" level=info msg="Attempting next endpoint for pull after error: Head "https://asia-east1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\": dial tcp 142.250.157.82:443: i/o timeout"
Feb 22 16:00:46 minikube dockerd[768]: time="2023-02-22T16:00:46.182019600Z" level=error msg="Handler for POST /v1.41/images/create returned error: Head "https://asia-east1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\": dial tcp 142.250.157.82:443: i/o timeout"
Feb 22 16:00:51 minikube dockerd[768]: time="2023-02-22T16:00:51.158767000Z" level=info msg="Attempting next endpoint for pull after error: Head "https://asia-east1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\": dial tcp 142.250.157.82:443: i/o timeout"
Feb 22 16:00:51 minikube dockerd[768]: time="2023-02-22T16:00:51.162393800Z" level=error msg="Handler for POST /v1.41/images/create returned error: Head "https://asia-east1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\": dial tcp 142.250.157.82:443: i/o timeout"
Feb 22 16:00:56 minikube dockerd[768]: time="2023-02-22T16:00:56.168694700Z" level=info msg="Attempting next endpoint for pull after error: Head "https://asia-east1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\": dial tcp 142.250.157.82:443: i/o timeout"
Feb 22 16:00:56 minikube dockerd[768]: time="2023-02-22T16:00:56.175901500Z" level=error msg="Handler for POST /v1.41/images/create returned error: Head "https://asia-east1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\": dial tcp 142.250.157.82:443: i/o timeout"
Feb 22 16:00:59 minikube dockerd[768]: time="2023-02-22T16:00:59.168246100Z" level=info msg="Attempting next endpoint for pull after error: Head "https://asia-east1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\": dial tcp 142.250.157.82:443: i/o timeout"
Feb 22 16:00:59 minikube dockerd[768]: time="2023-02-22T16:00:59.173199100Z" level=error msg="Handler for POST /v1.41/images/create returned error: Head "https://asia-east1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\": dial tcp 142.250.157.82:443: i/o timeout"
Feb 22 16:01:30 minikube dockerd[768]: time="2023-02-22T16:01:30.144171100Z" level=info msg="Attempting next endpoint for pull after error: Head "https://asia-east1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\": dial tcp 142.250.157.82:443: i/o timeout"
Feb 22 16:01:30 minikube dockerd[768]: time="2023-02-22T16:01:30.150980100Z" level=error msg="Handler for POST /v1.41/images/create returned error: Head "https://asia-east1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\": dial tcp 142.250.157.82:443: i/o timeout"
Feb 22 16:01:35 minikube dockerd[768]: time="2023-02-22T16:01:35.152138300Z" level=info msg="Attempting next endpoint for pull after error: Head "https://asia-east1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\": dial tcp 142.250.157.82:443: i/o timeout"
Feb 22 16:01:35 minikube dockerd[768]: time="2023-02-22T16:01:35.156279900Z" level=error msg="Handler for POST /v1.41/images/create returned error: Head "https://asia-east1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\": dial tcp 142.250.157.82:443: i/o timeout"
Feb 22 16:01:38 minikube dockerd[768]: time="2023-02-22T16:01:38.220562200Z" level=info msg="Attempting next endpoint for pull after error: Head "https://asia-east1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\": dial tcp 142.250.157.82:443: i/o timeout"
Feb 22 16:01:38 minikube dockerd[768]: time="2023-02-22T16:01:38.222223200Z" level=error msg="Handler for POST /v1.41/images/create returned error: Head "https://asia-east1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\": dial tcp 142.250.157.82:443: i/o timeout"
Feb 22 16:01:42 minikube dockerd[768]: time="2023-02-22T16:01:42.152635800Z" level=info msg="Attempting next endpoint for pull after error: Head "https://asia-east1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\": dial tcp 142.250.157.82:443: i/o timeout"
Feb 22 16:01:42 minikube dockerd[768]: time="2023-02-22T16:01:42.154475800Z" level=error msg="Handler for POST /v1.41/images/create returned error: Head "https://asia-east1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\": dial tcp 142.250.157.82:443: i/o timeout"
Feb 22 16:02:13 minikube dockerd[768]: time="2023-02-22T16:02:13.177721100Z" level=info msg="Attempting next endpoint for pull after error: Head "https://asia-east1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\": dial tcp 142.250.157.82:443: i/o timeout"
Feb 22 16:02:13 minikube dockerd[768]: time="2023-02-22T16:02:13.179931800Z" level=error msg="Handler for POST /v1.41/images/create returned error: Head "https://asia-east1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\": dial tcp 142.250.157.82:443: i/o timeout"
Feb 22 16:02:18 minikube dockerd[768]: time="2023-02-22T16:02:18.162878200Z" level=info msg="Attempting next endpoint for pull after error: Head "https://asia-east1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\": dial tcp 142.250.157.82:443: i/o timeout"
Feb 22 16:02:18 minikube dockerd[768]: time="2023-02-22T16:02:18.169550800Z" level=error msg="Handler for POST /v1.41/images/create returned error: Head "https://asia-east1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\": dial tcp 142.250.157.82:443: i/o timeout"
Feb 22 16:02:22 minikube dockerd[768]: time="2023-02-22T16:02:22.152420300Z" level=info msg="Attempting next endpoint for pull after error: Head "https://asia-east1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\": dial tcp 142.250.157.82:443: i/o timeout"
Feb 22 16:02:22 minikube dockerd[768]: time="2023-02-22T16:02:22.156674100Z" level=error msg="Handler for POST /v1.41/images/create returned error: Head "https://asia-east1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\": dial tcp 142.250.157.82:443: i/o timeout"
Feb 22 16:02:23 minikube dockerd[768]: time="2023-02-22T16:02:23.176581500Z" level=info msg="Attempting next endpoint for pull after error: Head "https://asia-east1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\": dial tcp 142.250.157.82:443: i/o timeout"
Feb 22 16:02:23 minikube dockerd[768]: time="2023-02-22T16:02:23.178643400Z" level=error msg="Handler for POST /v1.41/images/create returned error: Head "https://asia-east1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\": dial tcp 142.250.157.82:443: i/o timeout"
Feb 22 16:02:55 minikube dockerd[768]: time="2023-02-22T16:02:55.160755700Z" level=info msg="Attempting next endpoint for pull after error: Head "https://asia-east1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\": dial tcp 142.250.157.82:443: i/o timeout"
Feb 22 16:02:55 minikube dockerd[768]: time="2023-02-22T16:02:55.162849000Z" level=error msg="Handler for POST /v1.41/images/create returned error: Head "https://asia-east1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\": dial tcp 142.250.157.82:443: i/o timeout"
Feb 22 16:03:01 minikube dockerd[768]: time="2023-02-22T16:03:01.205298100Z" level=info msg="Attempting next endpoint for pull after error: Head "https://asia-east1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\": dial tcp 142.250.157.82:443: i/o timeout"
Feb 22 16:03:01 minikube dockerd[768]: time="2023-02-22T16:03:01.208165100Z" level=error msg="Handler for POST /v1.41/images/create returned error: Head "https://asia-east1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\": dial tcp 142.250.157.82:443: i/o timeout"
Feb 22 16:03:05 minikube dockerd[768]: time="2023-02-22T16:03:05.141204000Z" level=info msg="Attempting next endpoint for pull after error: Head "https://asia-east1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\": dial tcp 142.250.157.82:443: i/o timeout"
Feb 22 16:03:05 minikube dockerd[768]: time="2023-02-22T16:03:05.148748200Z" level=error msg="Handler for POST /v1.41/images/create returned error: Head "https://asia-east1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\": dial tcp 142.250.157.82:443: i/o timeout"
Feb 22 16:03:05 minikube dockerd[768]: time="2023-02-22T16:03:05.171525400Z" level=info msg="Attempting next endpoint for pull after error: Head "https://asia-east1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\": dial tcp 142.250.157.82:443: i/o timeout"
Feb 22 16:03:05 minikube dockerd[768]: time="2023-02-22T16:03:05.179237800Z" level=error msg="Handler for POST /v1.41/images/create returned error: Head "https://asia-east1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\": dial tcp 142.250.157.82:443: i/o timeout"
Feb 22 16:03:36 minikube dockerd[768]: time="2023-02-22T16:03:36.175209400Z" level=info msg="Attempting next endpoint for pull after error: Head "https://asia-east1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\": dial tcp 142.250.157.82:443: i/o timeout"
Feb 22 16:03:36 minikube dockerd[768]: time="2023-02-22T16:03:36.177127000Z" level=error msg="Handler for POST /v1.41/images/create returned error: Head "https://asia-east1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\": dial tcp 142.250.157.82:443: i/o timeout"
Feb 22 16:03:46 minikube dockerd[768]: time="2023-02-22T16:03:46.170433200Z" level=info msg="Attempting next endpoint for pull after error: Head "https://asia-east1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\": dial tcp 142.250.157.82:443: i/o timeout"
Feb 22 16:03:46 minikube dockerd[768]: time="2023-02-22T16:03:46.172660200Z" level=error msg="Handler for POST /v1.41/images/create returned error: Head "https://asia-east1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\": dial tcp 142.250.157.82:443: i/o timeout"
Feb 22 16:03:47 minikube dockerd[768]: time="2023-02-22T16:03:47.146001300Z" level=info msg="Attempting next endpoint for pull after error: Head "https://asia-east1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\": dial tcp 142.250.157.82:443: i/o timeout"
Feb 22 16:03:47 minikube dockerd[768]: time="2023-02-22T16:03:47.147797300Z" level=error msg="Handler for POST /v1.41/images/create returned error: Head "https://asia-east1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\": dial tcp 142.250.157.82:443: i/o timeout"
Feb 22 16:03:48 minikube dockerd[768]: time="2023-02-22T16:03:48.153613500Z" level=info msg="Attempting next endpoint for pull after error: Head "https://asia-east1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\": dial tcp 142.250.157.82:443: i/o timeout"
Feb 22 16:03:48 minikube dockerd[768]: time="2023-02-22T16:03:48.157512700Z" level=error msg="Handler for POST /v1.41/images/create returned error: Head "https://asia-east1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\": dial tcp 142.250.157.82:443: i/o timeout"
Feb 22 16:04:19 minikube dockerd[768]: time="2023-02-22T16:04:19.153358600Z" level=info msg="Attempting next endpoint for pull after error: Head "https://asia-east1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\": dial tcp 142.250.157.82:443: i/o timeout"
Feb 22 16:04:19 minikube dockerd[768]: time="2023-02-22T16:04:19.156239700Z" level=error msg="Handler for POST /v1.41/images/create returned error: Head "https://asia-east1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\": dial tcp 142.250.157.82:443: i/o timeout"
Feb 22 16:04:29 minikube dockerd[768]: time="2023-02-22T16:04:29.133471200Z" level=info msg="Attempting next endpoint for pull after error: Head "https://asia-east1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\": dial tcp 108.177.125.82:443: i/o timeout"
Feb 22 16:04:29 minikube dockerd[768]: time="2023-02-22T16:04:29.135307600Z" level=error msg="Handler for POST /v1.41/images/create returned error: Head "https://asia-east1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\": dial tcp 108.177.125.82:443: i/o timeout"
Feb 22 16:04:29 minikube dockerd[768]: time="2023-02-22T16:04:29.167022700Z" level=info msg="Attempting next endpoint for pull after error: Head "https://asia-east1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\": dial tcp 108.177.125.82:443: i/o timeout"
Feb 22 16:04:29 minikube dockerd[768]: time="2023-02-22T16:04:29.168952700Z" level=error msg="Handler for POST /v1.41/images/create returned error: Head "https://asia-east1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\": dial tcp 108.177.125.82:443: i/o timeout"
Feb 22 16:04:31 minikube dockerd[768]: time="2023-02-22T16:04:31.130041600Z" level=info msg="Attempting next endpoint for pull after error: Head "https://asia-east1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\": dial tcp 108.177.125.82:443: i/o timeout"
Feb 22 16:04:31 minikube dockerd[768]: time="2023-02-22T16:04:31.132035500Z" level=error msg="Handler for POST /v1.41/images/create returned error: Head "https://asia-east1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\": dial tcp 108.177.125.82:443: i/o timeout"
Feb 22 16:05:00 minikube dockerd[768]: time="2023-02-22T16:05:00.153201600Z" level=info msg="Attempting next endpoint for pull after error: Head "https://asia-east1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\": dial tcp 108.177.125.82:443: i/o timeout"
Feb 22 16:05:00 minikube dockerd[768]: time="2023-02-22T16:05:00.155317300Z" level=error msg="Handler for POST /v1.41/images/create returned error: Head "https://asia-east1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\": dial tcp 108.177.125.82:443: i/o timeout"

==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID

==> describe nodes <==

==> dmesg <==
If you want to keep using the local clock, then add:
"trace_clock=local"
on the kernel command line
[ +6.186873] init: (1) ERROR: ConfigApplyWindowsLibPath:2474: open /etc/ld.so.conf.d/ld.wsl.conf
[ +0.000005] failed 2
[ +0.058932] 9pnet_virtio: no channels available for device drvfs
[ +0.000011] WARNING: mount: waiting for virtio device...
[ +0.100229] 9pnet_virtio: no channels available for device drvfs
[ +0.100228] 9pnet_virtio: no channels available for device drvfs
[ +0.100231] 9pnet_virtio: no channels available for device drvfs
[ +0.131212] 9pnet_virtio: no channels available for device drvfs
[ +0.000010] WARNING: mount: waiting for virtio device...
[ +0.100195] 9pnet_virtio: no channels available for device drvfs
[ +0.100253] 9pnet_virtio: no channels available for device drvfs
[ +0.137675] 9pnet_virtio: no channels available for device drvfs
[ +0.000015] WARNING: mount: waiting for virtio device...
[ +0.100184] 9pnet_virtio: no channels available for device drvfs
[ +0.100310] 9pnet_virtio: no channels available for device drvfs
[ +0.139451] WARNING: /usr/share/zoneinfo/Asia/Shanghai not found. Is the tzdata package installed?
[ +10.276738] FS-Cache: Duplicate cookie detected
[ +0.000008] FS-Cache: O-cookie c=0000000098a737f5 [p=000000004b285944 fl=222 nc=0 na=1]
[ +0.000003] FS-Cache: O-cookie d=0000000038c48684 n=00000000496bc4fd
[ +0.000001] FS-Cache: O-key=[10] '34323934393339313136'
[ +0.000020] FS-Cache: N-cookie c=00000000e7ec1398 [p=000000004b285944 fl=2 nc=0 na=1]
[ +0.000002] FS-Cache: N-cookie d=0000000038c48684 n=0000000078928de0
[ +0.000002] FS-Cache: N-key=[10] '34323934393339313136'
[ +0.000927] init: (1) ERROR: ConfigApplyWindowsLibPath:2474: open /etc/ld.so.conf.d/ld.wsl.conf
[ +0.000006] failed 2
[ +0.076727] FS-Cache: Duplicate cookie detected
[ +0.000009] FS-Cache: O-cookie c=0000000053a9126a [p=000000004b285944 fl=222 nc=0 na=1]
[ +0.000003] FS-Cache: O-cookie d=0000000038c48684 n=0000000085080a05
[ +0.000002] FS-Cache: O-key=[10] '34323934393339313234'
[ +0.000017] FS-Cache: N-cookie c=000000004d4e4850 [p=000000004b285944 fl=2 nc=0 na=1]
[ +0.000078] FS-Cache: N-cookie d=0000000038c48684 n=00000000a1327540
[ +0.000005] FS-Cache: N-key=[10] '34323934393339313234'
[ +0.035679] WARNING: /usr/share/zoneinfo/Asia/Shanghai not found. Is the tzdata package installed?
[ +5.929696] FS-Cache: Duplicate cookie detected
[ +0.000004] FS-Cache: O-cookie c=000000003f7d3331 [p=000000004b285944 fl=222 nc=0 na=1]
[ +0.000001] FS-Cache: O-cookie d=0000000038c48684 n=000000000daff629
[ +0.000001] FS-Cache: O-key=[10] '34323934393339373231'
[ +0.000005] FS-Cache: N-cookie c=000000007352fffe [p=000000004b285944 fl=2 nc=0 na=1]
[ +0.000001] FS-Cache: N-cookie d=0000000038c48684 n=00000000dfec3ade
[ +0.000000] FS-Cache: N-key=[10] '34323934393339373231'
[ +0.000304] init: (1) ERROR: ConfigApplyWindowsLibPath:2474: open /etc/ld.so.conf.d/ld.wsl.conf
[ +0.000001] failed 2
[ +0.000669] init: (2) ERROR: UtilCreateProcessAndWait:653: /bin/mount failed with 2
[ +0.000077] init: (1) ERROR: UtilCreateProcessAndWait:673: /bin/mount failed with status 0x
[ +0.000002] ff00
[ +0.000005] init: (1) ERROR: ConfigMountFsTab:2529: Processing fstab with mount -a failed.
[ +0.009901] 9pnet_virtio: no channels available for device drvfs
[ +0.000006] WARNING: mount: waiting for virtio device...
[ +0.109060] 9pnet_virtio: no channels available for device drvfs
[ +0.000005] WARNING: mount: waiting for virtio device...
[ +0.108110] 9pnet_virtio: no channels available for device drvfs
[ +0.000005] WARNING: mount: waiting for virtio device...
[ +0.109728] WARNING: /usr/share/zoneinfo/Asia/Shanghai not found. Is the tzdata package installed?
[ +0.092179] init: (8) ERROR: CreateProcessEntryCommon:440: getpwuid(0) failed 2
[ +0.000005] init: (8) ERROR: CreateProcessEntryCommon:443: getpwuid(0) failed 2
[ +3.758565] cgroup: runc (630) created nested cgroup for controller "memory" which has incomplete hierarchy support. Nested cgroups may change behavior in the future.
[ +0.000000] cgroup: "memory" requires setting use_hierarchy to 1 on the root

==> kernel <==
16:05:02 up 33 min, 0 users, load average: 0.22, 0.08, 0.08
Linux minikube 5.10.16.3-microsoft-standard-WSL2 #1 SMP Fri Apr 2 22:23:49 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.5 LTS"

==> kubelet <==
-- Logs begin at Wed 2023-02-22 15:33:44 UTC, end at Wed 2023-02-22 16:05:02 UTC. --
Feb 22 16:04:19 minikube kubelet[13641]: E0222 16:04:19.157173 13641 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed pulling image "registry.k8s.io/pause:3.6": Error response from daemon: Head "https://asia-east1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\": dial tcp 142.250.157.82:443: i/o timeout"
Feb 22 16:04:19 minikube kubelet[13641]: E0222 16:04:19.157252 13641 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed pulling image "registry.k8s.io/pause:3.6": Error response from daemon: Head "https://asia-east1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\": dial tcp 142.250.157.82:443: i/o timeout" pod="kube-system/kube-controller-manager-minikube"
Feb 22 16:04:19 minikube kubelet[13641]: E0222 16:04:19.157273 13641 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed pulling image "registry.k8s.io/pause:3.6": Error response from daemon: Head "https://asia-east1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\": dial tcp 142.250.157.82:443: i/o timeout" pod="kube-system/kube-controller-manager-minikube"
Feb 22 16:04:19 minikube kubelet[13641]: E0222 16:04:19.157339 13641 pod_workers.go:965] "Error syncing pod, skipping" err="failed to "CreatePodSandbox" for "kube-controller-manager-minikube_kube-system(5175bba984ed52052d891b5a45b584b6)" with CreatePodSandboxError: "Failed to create sandbox for pod \"kube-controller-manager-minikube_kube-system(5175bba984ed52052d891b5a45b584b6)\": rpc error: code = Unknown desc = failed pulling image \"registry.k8s.io/pause:3.6\": Error response from daemon: Head \"https://asia-east1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\\\": dial tcp 142.250.157.82:443: i/o timeout"" pod="kube-system/kube-controller-manager-minikube" podUID=5175bba984ed52052d891b5a45b584b6
Feb 22 16:04:19 minikube kubelet[13641]: I0222 16:04:19.982708 13641 kubelet_node_status.go:70] "Attempting to register node" node="minikube"
Feb 22 16:04:19 minikube kubelet[13641]: E0222 16:04:19.983064 13641 kubelet_node_status.go:92] "Unable to register node with API server" err="Post "https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="minikube"
Feb 22 16:04:20 minikube kubelet[13641]: E0222 16:04:20.439929 13641 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.17462fdc9a338f34", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node minikube status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:time.Date(2023, time.February, 22, 15, 42, 15, 768469300, time.Local), LastTimestamp:time.Date(2023, time.February, 22, 15, 42, 15, 833571700, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Patch "https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events/minikube.17462fdc9a338f34": dial tcp 192.168.49.2:8443: connect: connection refused'(may retry after sleeping)
Feb 22 16:04:21 minikube kubelet[13641]: W0222 16:04:21.974943 13641 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)minikube&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
Feb 22 16:04:21 minikube kubelet[13641]: E0222 16:04:21.975021 13641 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)minikube&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
Feb 22 16:04:24 minikube kubelet[13641]: E0222 16:04:24.615495 13641 controller.go:146] failed to ensure lease exists, will retry in 7s, error: Get "https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/minikube?timeout=10s": dial tcp 192.168.49.2:8443: connect: connection refused
Feb 22 16:04:25 minikube kubelet[13641]: E0222 16:04:25.903404 13641 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node "minikube" not found"
Feb 22 16:04:26 minikube kubelet[13641]: I0222 16:04:26.990575 13641 kubelet_node_status.go:70] "Attempting to register node" node="minikube"
Feb 22 16:04:26 minikube kubelet[13641]: E0222 16:04:26.990875 13641 kubelet_node_status.go:92] "Unable to register node with API server" err="Post "https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="minikube"
Feb 22 16:04:29 minikube kubelet[13641]: E0222 16:04:29.135891 13641 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed pulling image "registry.k8s.io/pause:3.6": Error response from daemon: Head "https://asia-east1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\": dial tcp 108.177.125.82:443: i/o timeout"
Feb 22 16:04:29 minikube kubelet[13641]: E0222 16:04:29.135976 13641 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed pulling image "registry.k8s.io/pause:3.6": Error response from daemon: Head "https://asia-east1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\": dial tcp 108.177.125.82:443: i/o timeout" pod="kube-system/etcd-minikube"
Feb 22 16:04:29 minikube kubelet[13641]: E0222 16:04:29.135995 13641 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed pulling image "registry.k8s.io/pause:3.6": Error response from daemon: Head "https://asia-east1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\": dial tcp 108.177.125.82:443: i/o timeout" pod="kube-system/etcd-minikube"
Feb 22 16:04:29 minikube kubelet[13641]: E0222 16:04:29.136054 13641 pod_workers.go:965] "Error syncing pod, skipping" err="failed to "CreatePodSandbox" for "etcd-minikube_kube-system(a121e106627e5c6efa9ba48006cc43bf)" with CreatePodSandboxError: "Failed to create sandbox for pod \"etcd-minikube_kube-system(a121e106627e5c6efa9ba48006cc43bf)\": rpc error: code = Unknown desc = failed pulling image \"registry.k8s.io/pause:3.6\": Error response from daemon: Head \"https://asia-east1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\\\": dial tcp 108.177.125.82:443: i/o timeout"" pod="kube-system/etcd-minikube" podUID=a121e106627e5c6efa9ba48006cc43bf
Feb 22 16:04:29 minikube kubelet[13641]: E0222 16:04:29.169513 13641 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed pulling image "registry.k8s.io/pause:3.6": Error response from daemon: Head "https://asia-east1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\": dial tcp 108.177.125.82:443: i/o timeout"
Feb 22 16:04:29 minikube kubelet[13641]: E0222 16:04:29.169575 13641 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed pulling image "registry.k8s.io/pause:3.6": Error response from daemon: Head "https://asia-east1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\": dial tcp 108.177.125.82:443: i/o timeout" pod="kube-system/kube-apiserver-minikube"
Feb 22 16:04:29 minikube kubelet[13641]: E0222 16:04:29.169595 13641 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed pulling image "registry.k8s.io/pause:3.6": Error response from daemon: Head "https://asia-east1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\": dial tcp 108.177.125.82:443: i/o timeout" pod="kube-system/kube-apiserver-minikube"
Feb 22 16:04:29 minikube kubelet[13641]: E0222 16:04:29.169654 13641 pod_workers.go:965] "Error syncing pod, skipping" err="failed to "CreatePodSandbox" for "kube-apiserver-minikube_kube-system(5239bb256c1be9f71fd10c884d9299b1)" with CreatePodSandboxError: "Failed to create sandbox for pod \"kube-apiserver-minikube_kube-system(5239bb256c1be9f71fd10c884d9299b1)\": rpc error: code = Unknown desc = failed pulling image \"registry.k8s.io/pause:3.6\": Error response from daemon: Head \"https://asia-east1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\\\": dial tcp 108.177.125.82:443: i/o timeout"" pod="kube-system/kube-apiserver-minikube" podUID=5239bb256c1be9f71fd10c884d9299b1
Feb 22 16:04:30 minikube kubelet[13641]: E0222 16:04:30.440903 13641 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.17462fdc9a338f34", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node minikube status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:time.Date(2023, time.February, 22, 15, 42, 15, 768469300, time.Local), LastTimestamp:time.Date(2023, time.February, 22, 15, 42, 15, 833571700, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Patch "https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events/minikube.17462fdc9a338f34": dial tcp 192.168.49.2:8443: connect: connection refused'(may retry after sleeping)
Feb 22 16:04:31 minikube kubelet[13641]: E0222 16:04:31.132459 13641 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed pulling image "registry.k8s.io/pause:3.6": Error response from daemon: Head "https://asia-east1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\": dial tcp 108.177.125.82:443: i/o timeout"
Feb 22 16:04:31 minikube kubelet[13641]: E0222 16:04:31.132516 13641 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed pulling image "registry.k8s.io/pause:3.6": Error response from daemon: Head "https://asia-east1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\": dial tcp 108.177.125.82:443: i/o timeout" pod="kube-system/kube-scheduler-minikube"
Feb 22 16:04:31 minikube kubelet[13641]: E0222 16:04:31.132534 13641 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed pulling image "registry.k8s.io/pause:3.6": Error response from daemon: Head "https://asia-east1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\": dial tcp 108.177.125.82:443: i/o timeout" pod="kube-system/kube-scheduler-minikube"
Feb 22 16:04:31 minikube kubelet[13641]: E0222 16:04:31.132589 13641 pod_workers.go:965] "Error syncing pod, skipping" err="failed to "CreatePodSandbox" for "kube-scheduler-minikube_kube-system(197cd0de602d7cb722d0bd2daf878121)" with CreatePodSandboxError: "Failed to create sandbox for pod \"kube-scheduler-minikube_kube-system(197cd0de602d7cb722d0bd2daf878121)\": rpc error: code = Unknown desc = failed pulling image \"registry.k8s.io/pause:3.6\": Error response from daemon: Head \"https://asia-east1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\\\": dial tcp 108.177.125.82:443: i/o timeout"" pod="kube-system/kube-scheduler-minikube" podUID=197cd0de602d7cb722d0bd2daf878121
Feb 22 16:04:31 minikube kubelet[13641]: E0222 16:04:31.616872 13641 controller.go:146] failed to ensure lease exists, will retry in 7s, error: Get "https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/minikube?timeout=10s": dial tcp 192.168.49.2:8443: connect: connection refused
Feb 22 16:04:33 minikube kubelet[13641]: I0222 16:04:33.998202 13641 kubelet_node_status.go:70] "Attempting to register node" node="minikube"
Feb 22 16:04:33 minikube kubelet[13641]: E0222 16:04:33.998535 13641 kubelet_node_status.go:92] "Unable to register node with API server" err="Post "https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="minikube"
Feb 22 16:04:35 minikube kubelet[13641]: W0222 16:04:35.897374 13641 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
Feb 22 16:04:35 minikube kubelet[13641]: E0222 16:04:35.897445 13641 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
Feb 22 16:04:35 minikube kubelet[13641]: E0222 16:04:35.904237 13641 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node "minikube" not found"
Feb 22 16:04:38 minikube kubelet[13641]: W0222 16:04:38.153365 13641 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
Feb 22 16:04:38 minikube kubelet[13641]: E0222 16:04:38.153428 13641 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
Feb 22 16:04:38 minikube kubelet[13641]: E0222 16:04:38.618408 13641 controller.go:146] failed to ensure lease exists, will retry in 7s, error: Get "https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/minikube?timeout=10s": dial tcp 192.168.49.2:8443: connect: connection refused
Feb 22 16:04:39 minikube kubelet[13641]: E0222 16:04:39.827239 13641 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://control-plane.minikube.internal:8443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 192.168.49.2:8443: connect: connection refused
Feb 22 16:04:40 minikube kubelet[13641]: E0222 16:04:40.441584 13641 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.17462fdc9a338f34", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node minikube status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:time.Date(2023, time.February, 22, 15, 42, 15, 768469300, time.Local), LastTimestamp:time.Date(2023, time.February, 22, 15, 42, 15, 833571700, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Patch "https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events/minikube.17462fdc9a338f34": dial tcp 192.168.49.2:8443: connect: connection refused'(may retry after sleeping)
Feb 22 16:04:41 minikube kubelet[13641]: I0222 16:04:41.005820 13641 kubelet_node_status.go:70] "Attempting to register node" node="minikube"
Feb 22 16:04:41 minikube kubelet[13641]: E0222 16:04:41.006274 13641 kubelet_node_status.go:92] "Unable to register node with API server" err="Post "https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="minikube"
Feb 22 16:04:45 minikube kubelet[13641]: E0222 16:04:45.620194 13641 controller.go:146] failed to ensure lease exists, will retry in 7s, error: Get "https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/minikube?timeout=10s": dial tcp 192.168.49.2:8443: connect: connection refused
Feb 22 16:04:45 minikube kubelet[13641]: E0222 16:04:45.904995 13641 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node "minikube" not found"
Feb 22 16:04:48 minikube kubelet[13641]: I0222 16:04:48.023993 13641 kubelet_node_status.go:70] "Attempting to register node" node="minikube"
Feb 22 16:04:48 minikube kubelet[13641]: E0222 16:04:48.024607 13641 kubelet_node_status.go:92] "Unable to register node with API server" err="Post "https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="minikube"
Feb 22 16:04:48 minikube kubelet[13641]: W0222 16:04:48.657072 13641 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
Feb 22 16:04:48 minikube kubelet[13641]: E0222 16:04:48.657301 13641 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
Feb 22 16:04:50 minikube kubelet[13641]: E0222 16:04:50.442788 13641 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.17462fdc9a338f34", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node minikube status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:time.Date(2023, time.February, 22, 15, 42, 15, 768469300, time.Local), LastTimestamp:time.Date(2023, time.February, 22, 15, 42, 15, 833571700, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Patch "https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events/minikube.17462fdc9a338f34": dial tcp 192.168.49.2:8443: connect: connection refused'(may retry after sleeping)
Feb 22 16:04:52 minikube kubelet[13641]: E0222 16:04:52.621736 13641 controller.go:146] failed to ensure lease exists, will retry in 7s, error: Get "https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/minikube?timeout=10s": dial tcp 192.168.49.2:8443: connect: connection refused
Feb 22 16:04:55 minikube kubelet[13641]: I0222 16:04:55.033595 13641 kubelet_node_status.go:70] "Attempting to register node" node="minikube"
Feb 22 16:04:55 minikube kubelet[13641]: E0222 16:04:55.033896 13641 kubelet_node_status.go:92] "Unable to register node with API server" err="Post "https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="minikube"
Feb 22 16:04:55 minikube kubelet[13641]: E0222 16:04:55.906917 13641 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node "minikube" not found"
Feb 22 16:04:56 minikube kubelet[13641]: W0222 16:04:56.768621 13641 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)minikube&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
Feb 22 16:04:56 minikube kubelet[13641]: E0222 16:04:56.768694 13641 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)minikube&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
Feb 22 16:04:59 minikube kubelet[13641]: E0222 16:04:59.623307 13641 controller.go:146] failed to ensure lease exists, will retry in 7s, error: Get "https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/minikube?timeout=10s": dial tcp 192.168.49.2:8443: connect: connection refused
Feb 22 16:05:00 minikube kubelet[13641]: E0222 16:05:00.155920 13641 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed pulling image "registry.k8s.io/pause:3.6": Error response from daemon: Head "https://asia-east1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\": dial tcp 108.177.125.82:443: i/o timeout"
Feb 22 16:05:00 minikube kubelet[13641]: E0222 16:05:00.155997 13641 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed pulling image "registry.k8s.io/pause:3.6": Error response from daemon: Head "https://asia-east1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\": dial tcp 108.177.125.82:443: i/o timeout" pod="kube-system/kube-controller-manager-minikube"
Feb 22 16:05:00 minikube kubelet[13641]: E0222 16:05:00.156033 13641 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed pulling image "registry.k8s.io/pause:3.6": Error response from daemon: Head "https://asia-east1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\": dial tcp 108.177.125.82:443: i/o timeout" pod="kube-system/kube-controller-manager-minikube"
Feb 22 16:05:00 minikube kubelet[13641]: E0222 16:05:00.156084 13641 pod_workers.go:965] "Error syncing pod, skipping" err="failed to "CreatePodSandbox" for "kube-controller-manager-minikube_kube-system(5175bba984ed52052d891b5a45b584b6)" with CreatePodSandboxError: "Failed to create sandbox for pod \"kube-controller-manager-minikube_kube-system(5175bba984ed52052d891b5a45b584b6)\": rpc error: code = Unknown desc = failed pulling image \"registry.k8s.io/pause:3.6\": Error response from daemon: Head \"https://asia-east1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\\\": dial tcp 108.177.125.82:443: i/o timeout"" pod="kube-system/kube-controller-manager-minikube" podUID=5175bba984ed52052d891b5a45b584b6
Feb 22 16:05:00 minikube kubelet[13641]: E0222 16:05:00.445534 13641 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.17462fdc9a338f34", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node minikube status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:time.Date(2023, time.February, 22, 15, 42, 15, 768469300, time.Local), LastTimestamp:time.Date(2023, time.February, 22, 15, 42, 15, 833571700, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Patch "https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events/minikube.17462fdc9a338f34": dial tcp 192.168.49.2:8443: connect: connection refused'(may retry after sleeping)
Feb 22 16:05:02 minikube kubelet[13641]: I0222 16:05:02.042747 13641 kubelet_node_status.go:70] "Attempting to register node" node="minikube"
Feb 22 16:05:02 minikube kubelet[13641]: E0222 16:05:02.043188 13641 kubelet_node_status.go:92] "Unable to register node with API server" err="Post "https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="minikube"

Operating system version used:
Windows 11 Pro
21H2
22000.194
Intel(R) Core(TM) i7-10710U CPU @ 1.10GHz 1.61 GHz
64-bit operating system, x64-based processor

minikube v1.29.0 on Microsoft Windows 11 Pro 10.0.22000.194 Build 22000.194
Preparing Kubernetes v1.26.1 on Docker 20.10.23
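
For reference, the suggestion printed in the minikube output above amounts to retrying the start with the kubelet cgroup driver set explicitly. A minimal sketch of that retry follows; deleting the half-initialized cluster first with minikube delete is an assumption on my part, not part of minikube's suggestion, and note that the Docker logs above also show the registry.k8s.io pause image pull timing out, which a kubelet flag alone would not address:

minikube delete
minikube start --extra-config=kubelet.cgroup-driver=systemd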

@kangdf kangdf added the l/zh-CN Issues in or relating to Chinese label Feb 22, 2023
@cubxxw

cubxxw commented Mar 17, 2023

WSL2? Please download the binary directly instead of building it with make.
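
A minimal sketch of downloading the release binary directly on Windows, assuming the bundled curl.exe and the standard latest-release asset name from the minikube releases page (the output path is just an example):

curl.exe -Lo minikube.exe https://github.com/kubernetes/minikube/releases/latest/download/minikube-windows-amd64.exe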

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 15, 2023
@cubxxw

cubxxw commented Jun 15, 2023

/close

@k8s-ci-robot
Contributor

@cubxxw: Closing this issue.

In response to this:

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
