Creating a Kubernetes cluster with kubeadm (hands-on)


Parameters

  • Node IP: 10.211.55.5
  • Pod IP: 192.168.0.0/16
  • Service IP: 10.96.0.0/12

Of these three IPs:

  • Node IP: the IP address of the machine the node runs on.
  • Pod IP: the CIDR range from which Pod IPs are allocated when containers are created.
  • Service IP: the CIDR range for Service (load-balancing) IPs; Services load-balance traffic to the Pod IPs.

Apart from the node IP, the other two can be left unspecified, but you will soon find that specifying all three is the most common way to create a Kubernetes cluster.

Also: the "control plane" is simply what used to be called the master node; Kubernetes later renamed it, and the English term is control-plane.
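
Later, once the cluster is up, you can double-check which ranges it actually ended up with; kubeadm records them as flags on the control-plane static Pods. A minimal sketch (assumes kubectl is already configured, as shown further below):

kubectl -n kube-system get pod -l component=kube-apiserver -o yaml | grep -- --service-cluster-ip-range
kubectl -n kube-system get pod -l component=kube-controller-manager -o yaml | grep -- --cluster-cidr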

Initializing the control-plane node

For users in mainland China, switch to the Alibaba Cloud image repository:

kubeadm init --apiserver-advertise-address=10.211.55.5 \
             --pod-network-cidr=192.168.0.0/16 \
             --service-cidr=10.96.0.0/12 \
             --image-repository registry.aliyuncs.com/google_containers

Output:

[init] Using Kubernetes version: vX.Y.Z
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kubeadm-cp localhost] and IPs [10.138.0.4 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kubeadm-cp localhost] and IPs [10.138.0.4 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubeadm-cp kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.138.0.4]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 31.501735 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-X.Y" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "kubeadm-cp" as an annotation
[mark-control-plane] Marking the node kubeadm-cp as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node kubeadm-cp as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: <token>
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a Pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  /docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join <control-plane-host>:<control-plane-port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>

Run the following commands to set up kubectl for your (non-root) user; they are also part of the kubeadm init output above:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
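
Alternatively, if you are running as root, you can skip the copy and point kubectl at the admin kubeconfig directly (kubeadm init prints this tip as well):

export KUBECONFIG=/etc/kubernetes/admin.conf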

Control-plane node isolation

Note: if this is a single-node cluster, remove the taint as shown below; otherwise your network add-on may fail to come up, because there are no worker nodes to schedule onto.

By default, for security reasons, the cluster does not schedule Pods on the control-plane (master) node. If you want to be able to schedule Pods on it, run:

kubectl taint nodes --all node-role.kubernetes.io/control-plane-

Output:

node "test-01" untainted
taint "node-role.kubernetes.io/master:" not found
taint "node-role.kubernetes.io/master:" not found

This removes the node-role.kubernetes.io/control-plane taint from every node that has it, including the control-plane node, which means the scheduler will then be able to place Pods anywhere.
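
Which taint name your cluster uses depends on the Kubernetes version (newer releases use node-role.kubernetes.io/control-plane, older ones used node-role.kubernetes.io/master, which is why the sample output above mentions it). A quick way to see what is actually set on each node before and after removing it:

kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints[*].key}{"\n"}{end}'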

Joining nodes

Nodes are where your workloads (containers, Pods, etc.) run. To add new nodes to the cluster, do the following on each machine:

  • SSH to the machine
  • Become root (e.g. sudo su -)

Take the following values from the kubeadm init output in the previous step, then run:

kubeadm join --token <token> <control-plane-host>:<control-plane-port> --discovery-token-ca-cert-hash sha256:<hash>

If you do not have the --token value, you can get it by running the following on the control-plane node:

kubeadm token list

The output is similar to this:

TOKEN                    TTL  EXPIRES              USAGES           DESCRIPTION            EXTRA GROUPS
8ewj1p.9r9hcjoqgajrj4gi  23h  2018-06-12T02:51:28Z authentication,  The default bootstrap  system:
                                                   signing          token generated by     bootstrappers:
                                                                    'kubeadm init'.        kubeadm:
                                                                                           default-node-token

By default, tokens expire after 24 hours. If you want to join a node after the current token has expired, you can create a new token by running the following on the control-plane node:

kubeadm token create

The output is similar to this:

5didvk.d09sbcov8ph2amjw
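
On recent kubeadm releases you can also create a fresh token and print a complete, ready-to-run join command in one step, which saves assembling the token and hash by hand:

kubeadm token create --print-join-command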

If you do not have the value of --discovery-token-ca-cert-hash, you can get it by running the following command chain on the control-plane node:

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
   openssl dgst -sha256 -hex | sed 's/^.* //'

The output is similar to this:

8cb2de97839780a412b93877f8507ad6c94f73add17d5d7058e91741c9d5ec78

To specify an IPv6 tuple for <control-plane-host>:<control-plane-port>, the IPv6 address must be enclosed in square brackets, for example: [fd00::101]:2073.

The output should look similar to:

[preflight] Running pre-flight checks

... (log output of join workflow) ...

Node join complete:
* Certificate signing request sent to control-plane and response
  received.
* Kubelet informed of new secure connection details.

Run 'kubectl get nodes' on control-plane to see this machine join.

After a few seconds, when you run kubectl get nodes on the control-plane node, you will see the new node appear in the output.
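
For a slightly more detailed check from the control-plane node, -o wide also shows each node's internal IP, OS image and container runtime, which helps confirm that the right machine joined:

kubectl get nodes -o wide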

Installing a Pod network add-on

Calico

kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/tigera-operator.yaml

# Note: if you ran kubeadm init with --pod-network-cidr=192.168.0.0/16, no change is needed and you can apply this file directly; otherwise, download it first, change the CIDR to the one you configured, and then create it (see the sketch after this block).
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/custom-resources.yaml
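
Of the two manifests, tigera-operator.yaml installs the Tigera operator and its CRDs, while custom-resources.yaml creates the Installation resource that tells the operator to deploy Calico with a given Pod CIDR. If your --pod-network-cidr is not 192.168.0.0/16, here is a minimal sketch of editing the file first (10.244.0.0/16 below is only an example value, substitute the CIDR you passed to kubeadm init; the cidr: field is as of Calico v3.26):

curl -LO https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/custom-resources.yaml
# replace Calico's default Pod CIDR with the one configured at kubeadm init (example value)
sed -i 's#cidr: 192.168.0.0/16#cidr: 10.244.0.0/16#' custom-resources.yaml
kubectl create -f custom-resources.yaml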

Verifying the cluster

Once all of the steps above are complete, run:

kubectl get pods -A

and confirm that all components are in the Running state.
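
The Calico and CoreDNS pods can take a minute or two to pull images and start, so it helps to watch them until everything settles into Running:

kubectl get pods -A -w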

Then run:

kubectl get nodes

and confirm that every node is Ready.


Comments

想个好名字 (1 year ago):

Hi, this URL

kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/tigera-operator.yaml

isn't reachable for me either, unfortunately.

Good of you to report it. Here are mirrors accessible from inside China:

kubectl create -f https://www.kubebiz.com/raw/KubeBiz/calico/v3.26.1/tigera-operator.yaml
kubectl create -f https://www.kubebiz.com/raw/KubeBiz/calico/v3.26.1/custom-resources.yaml

What's the difference between these two? Do both need to be run?

Yes, both of them need to be run.

Could you explain what each of these two files actually does?

想个好名字 (1 year ago):
kube-system       coredns-7bdc4cb885-nd7f5            0/1     Pending   0          56m

It never reaches Running, what should I do?

kubectl describe pod coredns-7bdc4cb885-nd7f5 -n kube-system

and check the reason shown in the events.

Warning FailedScheduling 10m (x30 over 7h20m) default-scheduler 0/2 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling..

I'm new to this and don't quite understand, sorry to bother you.

Apply both manifests first, then check whether the error has changed.

If I delete these coredns pods, will they be recreated automatically?

Error from server (AlreadyExists): error when creating "https://www.kubebiz.com/raw/KubeBiz/calico/v3.26.1/tigera-operator.yaml": deployments.apps "tigera-operator" already exists

Is there a command to restart a single pod?

Use the delete command:

kubectl delete pod coredns-7bdc4cb885-nd7f5 -n kube-system
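
Since CoreDNS is managed by a Deployment, the deleted pod is recreated automatically. If what you actually want is a restart, rolling the Deployment is the cleaner way:

kubectl -n kube-system rollout restart deployment coredns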

That part is fixed, but now the node is NotReady. I just rebooted it, and now the kubelet service on the worker node won't start at all.

Could you tell me what the general approach is for troubleshooting a service like this?!

Locate the error through the system logs:

journalctl -f
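
To narrow it down to the kubelet itself (assuming it runs as a systemd service, which is the kubeadm default):

journalctl -u kubelet -f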

I know how to view the logs, but I can't make sense of the errors I see. Which keywords or fields should I be looking for?
