☀ Getting information
List the pods in the kube-system namespace whose status is Running
kubectl get pods --field-selector=status.phase=Running -n kube-system
List all PVs sorted by name
kubectl get pv --sort-by=.metadata.name
List all PVs sorted by capacity and write the output to the specified file
kubectl get pv --sort-by=.spec.capacity.storage > xxx.txt
Extract the lines containing Error from the specified pod's logs and save them to the specified file
kubectl logs <podname> | grep Error > /opt/KUCC000xxx/KUCC000xxx.txt
List the pods behind the Service named test, find the one with the highest CPU usage, and write that pod's name to a file
kubectl describe svc test   # find the Service's selector
kubectl get pod -l app=test
kubectl top pod
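To narrow top to the Service's pods and record the answer, a sketch assuming the selector is app=test and the output file path is a placeholder (newer kubectl also accepts --sort-by=cpu on top pod):
kubectl top pod -l app=test
# copy the name of the pod with the highest CPU usage
echo <podname> > /opt/highest-cpu-pod.txt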
List the available nodes, excluding those that are unschedulable or unreachable (NotReady), and write the number to a file
kubectl get nodes
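A sketch of counting the usable nodes and writing the number, assuming /opt/nodenum as a placeholder output file; cordoned nodes show SchedulingDisabled, and taints are worth checking separately:
kubectl get nodes | grep -w Ready | grep -v SchedulingDisabled | wc -l > /opt/nodenum
kubectl describe nodes | grep -i taint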
☀ Create and schedule a pod
Create a pod named nginx and schedule it onto a node carrying the label disktype=ssd
https://kubernetes.io/zh-cn/docs/tasks/configure-pod-container/assign-pods-nodes/
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
  nodeSelector:
    disktype: ssd
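If no node carries the label yet, it can be added first and the scheduling verified afterwards (the node name and the manifest file name are placeholders):
kubectl label nodes <node-name> disktype=ssd
kubectl apply -f nginx-pod.yaml
kubectl get pod nginx -o wide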
☀ Create and schedule a pod
Create a pod named nginx and schedule it onto the node named foo-node
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  nodeName: foo-node
  containers:
  - name: nginx
    image: nginx
☀ Create a multi-container pod
In the cka namespace, create a pod named podx4 containing the four specified images: nginx, redis, memcached and busybox
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: podx4
  name: podx4
spec:
  containers:
  - name: nginx
    image: nginx
  - name: redis
    image: redis
  - name: memcached
    image: memcached
  - name: busybox
    image: busybox
kubectl apply -f podx4.yaml -n cka
☀ Create a pod with storage
Create a pod named pod-npv with the image nginx
Add a non-persistent volume named cache-volume and mount it at /data
apiVersion: v1
kind: Pod
metadata:
  name: pod-npv
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: cache-volume
      mountPath: /data
  volumes:
  - name: cache-volume
    emptyDir: {}
☀ Create a pod with an associated Service
Create a pod named nginx-app with the image nginx, then create a Service named nginx-app for it with type NodePort
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx-app
  labels:
    run: nginx-app
spec:
  containers:
  - name: nginx
    image: nginx
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-app
spec:
  selector:
    run: nginx-app
  ports:
  - name: http
    protocol: TCP
    port: 80
  - name: https
    protocol: TCP
    port: 443
  type: NodePort
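A quick check that the Service picks up the pod and is reachable on its NodePort (node IP and port are placeholders):
kubectl get endpoints nginx-app
kubectl get svc nginx-app
curl http://<node-ip>:<node-port>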
☀ Create a Pod with an Init Container
Write a pod manifest that includes an Init Container.
The Init Container creates an empty file.
The pod's main container checks whether that file exists and exits if it does not.
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-init
  labels:
    app: pod-with-init
spec:
  volumes:
  - name: workdir
    emptyDir: {}
  initContainers:
  - name: touch
    image: busybox:1.28
    command: ['sh', '-c', 'touch /workdir/emptyfile']
    volumeMounts:
    - name: workdir
      mountPath: /workdir
  containers:
  - name: pwi
    image: busybox:1.28
    command: ['sh', '-c', 'ls /opt/emptyfile || exit 1']
    volumeMounts:
    - name: workdir
      mountPath: /opt
☀ Create a DaemonSet
Create an nginx DaemonSet and make sure it runs on every node
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-daemonset
spec:
  selector:
    matchLabels:
      name: nginx-daemonset
  template:
    metadata:
      labels:
        name: nginx-daemonset
    spec:
      containers:
      - name: nginx-daemonset
        image: nginx
☀ Static pod
Configure the kubelet on a node to run a static pod
# Find the kubelet config file path
systemctl status kubelet | grep -e "--config"
# Find the static pod manifest path
cat /var/lib/kubelet/config.yaml | grep staticPodPath:
apiVersion: v1
kind: Pod
metadata:
  name: static-pod-nginx
  labels:
    role: myrole
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - name: web
      containerPort: 80
      protocol: TCP
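The manifest then goes into the staticPodPath; a sketch assuming the kubeadm default of /etc/kubernetes/manifests and a placeholder node name (the mirror pod is named after the node):
cp static-pod-nginx.yaml /etc/kubernetes/manifests/
# the kubelet watches the directory, so no restart is normally needed
kubectl get pod static-pod-nginx-<node-name>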
☀ Create a Deployment with an associated Service
Create a deployment and expose it through a Service
Method 1
kubectl create deployment my-deploy --image=nginx
kubectl expose deployment my-deploy --port=80 --target-port=80 --type=NodePort
Method 2
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deploy
  labels:
    app: my-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-deploy-tp
  template:
    metadata:
      labels:
        app: my-deploy-tp
    spec:
      containers:
      - name: nginx
        image: nginx
---
apiVersion: v1
kind: Service
metadata:
  name: my-deploy-svc
  labels:
    app: my-deploy-svc
spec:
  selector:
    app: my-deploy-tp
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  type: NodePort
☀ Scale a deployment
Change the replica count of the deployment named nginx-app to 4
kubectl scale deployment nginx-app --replicas=4
☀ Image upgrade and rollback
Create a deployment named deploy-rollout with the image nginx:1.11.0-alpine
Update the image to nginx:1.11.3-alpine and record the upgrade
Then roll the image back to nginx:1.11.0-alpine
# Create the deployment
kubectl create deployment deploy-rollout --image=nginx:1.11.0-alpine
# Update the image
kubectl set image deployment deploy-rollout nginx=nginx:1.11.3-alpine
# [Optional] annotate the rollout with a change-cause
kubectl annotate deployment [deployment_name] kubernetes.io/change-cause="[notes]"
# Roll back
kubectl rollout undo deployment deploy-rollout
# View rollout history
kubectl rollout history deployment deploy-rollout
☀ DNS resolution
Create a pod named nginx-nslookup with the image nginx
Create a corresponding Service named nginx-nslookup-svc
Use nslookup to find the DNS records of the Service and the pod, and write them to the specified file
# Create a busybox pod to run nslookup from
kubectl run busybox-nslookup --image=busybox --command -- sleep 3600
# Create the pod
kubectl run nginx-nslookup --image=nginx
# Expose the port, method 1
kubectl expose pod nginx-nslookup --type=NodePort --port=80 --target-port=80 --name=nginx-nslookup-svc
# Expose the port, method 2 (the generated selector app=nginx-nslookup-svc must then be edited to match the pod's labels)
kubectl create svc nodeport nginx-nslookup-svc --tcp=80:80
# Get the pod's IP address
kubectl get pod -o wide
# Resolve
kubectl exec busybox-nslookup -- nslookup [ip]
kubectl exec busybox-nslookup -- nslookup [hostname]
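A sketch of the full lookups and the output capture, assuming the default namespace and placeholder output files; pod records use the dashed-IP form:
kubectl exec busybox-nslookup -- nslookup nginx-nslookup-svc.default.svc.cluster.local > /opt/service.dns
kubectl exec busybox-nslookup -- nslookup <pod-ip-with-dashes>.default.pod.cluster.local > /opt/pod.dns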
☀ Secret
Create a Secret containing username and password
Create a pod that mounts the Secret at /etc/foo
Create another pod that consumes the Secret through environment variables
---
apiVersion: v1
kind: Secret
metadata:
  name: x-secret
data:
  username: dWdseQo=
  password: YmVhdXRpZnVsCg==
---
apiVersion: v1
kind: Pod
metadata:
  name: secret-1
  labels:
    run: secret-1
spec:
  containers:
  - name: secret-1
    image: nginx
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/foo
  dnsPolicy: ClusterFirst
  restartPolicy: Always
  volumes:
  - name: secret-volume
    secret:
      secretName: x-secret
---
apiVersion: v1
kind: Pod
metadata:
  name: secret-2
  labels:
    run: secret-2
spec:
  containers:
  - name: secret-2
    image: nginx
    env:
    - name: XXA
      valueFrom:
        secretKeyRef:
          name: x-secret
          key: username
    - name: XXB
      valueFrom:
        secretKeyRef:
          name: x-secret
          key: password
  dnsPolicy: ClusterFirst
  restartPolicy: Always
# Alternative: create the Secret imperatively
kubectl create secret generic x-secret --from-literal=password=password --from-literal=username=username
# Verify
kubectl exec secret-1 -- /bin/sh -c 'cat /etc/foo/username'
kubectl exec secret-1 -- /bin/sh -c 'cat /etc/foo/password'
kubectl exec secret-2 -- /bin/sh -c 'echo $XXA'
kubectl exec secret-2 -- /bin/sh -c 'echo $XXB'
☀ RBAC
Create a ClusterRole named deployment-clusterrole
that only allows creating Deployment, StatefulSet and DaemonSet resources
Create a ServiceAccount named cicd-token
Bind the ClusterRole to the ServiceAccount
# Create the ClusterRole
kubectl create clusterrole deployment-clusterrole --verb=create --resource=deployments,statefulsets,daemonsets
# Create the ServiceAccount (in the target namespace)
kubectl create serviceaccount cicd-token -n [namespace]
# Bind them with a RoleBinding (the --serviceaccount format is namespace:name)
kubectl create rolebinding xxx -n [namespace] --clusterrole=deployment-clusterrole --serviceaccount=[namespace]:cicd-token
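The permissions can be spot-checked with kubectl auth can-i, assuming the ServiceAccount lives in the namespace written here as [namespace]:
kubectl auth can-i create deployments -n [namespace] --as=system:serviceaccount:[namespace]:cicd-token   # should be yes
kubectl auth can-i delete deployments -n [namespace] --as=system:serviceaccount:[namespace]:cicd-token   # should be no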
☀ Node scheduling
Mark the node ek8s-node-0 as unavailable and reschedule all pods running on it
# Stop new pods from being scheduled onto the node
kubectl cordon ek8s-node-0
# Evict all pods from the node (drain implies cordon)
kubectl drain ek8s-node-0 --ignore-daemonsets
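If the drain is blocked by pods that use emptyDir volumes or are not managed by a controller, these extra flags are commonly needed (older kubectl uses --delete-local-data instead of --delete-emptydir-data):
kubectl drain ek8s-node-0 --ignore-daemonsets --delete-emptydir-data --force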
☀ Cluster upgrade
The existing Kubernetes cluster is running version 1.21.0
Upgrade all Kubernetes control-plane and node components on the master node k8sm1 only, to version 1.21.1
Do not upgrade etcd
Also upgrade kubelet and kubectl on the master node
Make sure to drain the master node before the upgrade and uncordon it afterwards
# Stop scheduling onto the node
kubectl cordon k8sm1
# Evict the pods on the node
kubectl drain k8sm1 --ignore-daemonsets
# Install the new kubeadm
yum install kubeadm-1.21.1
# Check the upgrade plan
kubeadm upgrade plan
# Apply the upgrade without touching etcd
kubeadm upgrade apply v1.21.1 --etcd-upgrade=false
# Upgrade kubelet and kubectl
yum install kubelet-1.21.1 kubectl-1.21.1
# Restart kubelet
systemctl daemon-reload
systemctl restart kubelet
kubectl uncordon k8sm1
# Check the version
kubeadm upgrade plan
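A final check that the control plane reports the new version:
kubectl get node k8sm1
kubectl version --short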
☀ Log monitoring
Monitor the logs of pod foobar and extract the log lines matching the error "unable-to-access-website". Write them to /opt/KULM00612/foobar.
kubectl logs foobar | grep "unable-to-access-website" > /opt/KULM00612/foobar
☀ Node debugging
Given a cluster with an unreachable node, troubleshoot the node failure; the fix must be permanent.
It is almost always the kubelet service not running
https://kubernetes.io/docs/tasks/debug-application-cluster/debug-cluster/
# Check cluster status
kubectl get nodes
# Inspect the failed node
kubectl describe node node1
# The events/message indicate the kubelet cannot be reached (roughly, from memory)
# SSH into the failed node
ssh node1
# Look for the kubelet process on the node
ps -aux | grep kubelet
# No kubelet process found; check the kubelet service status
systemctl status kubelet.service
# The kubelet service is not running; start it and watch
systemctl start kubelet.service
# It starts normally; enable the service so the fix survives a reboot
systemctl enable kubelet.service
# Return to the exam node and check the status
exit
kubectl get nodes # now healthy
☀ Create a hostPath PV
Create a PV of type hostPath, backed by /data, with a size of 1Gi and access mode ReadOnlyMany
Reference: https://kubernetes.io/docs/concepts/storage/persistent-volumes/
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-host
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
  - ReadOnlyMany
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: xxzc
  hostPath:
    path: /data
☀ Run a logging agent in a sidecar container
apiVersion: v1
kind: Pod
metadata:
  name: counter
spec:
  containers:
  - name: count
    image: busybox:1.28
    args:
    - /bin/sh
    - -c
    - >
      i=0;
      while true;
      do
        echo "$i: $(date)" >> /var/log/1.log;
        echo "$(date) INFO $i" >> /var/log/2.log;
        i=$((i+1));
        sleep 1;
      done
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  - name: count-log-1
    image: busybox:1.28
    args: [/bin/sh, -c, 'tail -n+1 -F /var/log/1.log']
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  - name: count-log-2
    image: busybox:1.28
    args: [/bin/sh, -c, 'tail -n+1 -F /var/log/2.log']
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  volumes:
  - name: varlog
    emptyDir: {}
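Each log stream can then be read from its sidecar:
kubectl logs counter count-log-1
kubectl logs counter count-log-2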
☀ etcd backup and restore
Back up etcd using its snapshot feature (endpoints, ca, cert and key are provided)
ETCDCTL_API=3 etcdctl --endpoints https://127.0.0.1:2379 \
--cacert=ca.pem --cert=cert.pem --key=key.pem \
snapshot save snapshotdb
# The exam provides the endpoint and certificate paths, e.g.
export ETCDCTL_API=3
etcdctl --endpoints=https://127.0.0.1:2379 --cacert="/opt/xxxxxx" ...
Back up etcd
yum -y install etcd  # provides the etcdctl client
Find the etcd certificate locations
ls /etc/kubernetes/pki/etcd/
ca.crt ca.key healthcheck-client.crt healthcheck-client.key peer.crt peer.key server.crt server.key
Check the etcdctl help
ETCDCTL_API=3 etcdctl -h
snapshot save Stores an etcd node backend snapshot to a given file
snapshot restore Restores an etcd member snapshot to an etcd directory
snapshot status Gets backend snapshot status of a given file
--cacert="" verify certificates of TLS-enabled secure servers using this CA bundle
--cert="" identify secure client using this TLS certificate file
--endpoints=[127.0.0.1:2379] gRPC endpoints
--key="" identify secure client using this TLS key file
Take the snapshot
ETCDCTL_API=3 etcdctl --endpoints https://127.0.0.1:2379 snapshot save snapshotdb --cacert="/etc/kubernetes/pki/etcd/ca.crt" --cert="/etc/kubernetes/pki/etcd/server.crt" --key="/etc/kubernetes/pki/etcd/server.key"
Snapshot saved at snapshotdb
List the backup file
ls
snapshotdb
Check the snapshot status
ETCDCTL_API=3 etcdctl --endpoints https://127.0.0.1:2379 snapshot status snapshotdb --cacert="/etc/kubernetes/pki/etcd/ca.crt" --cert="/etc/kubernetes/pki/etcd/server.crt" --key="/etc/kubernetes/pki/etcd/server.key"
593dda57, 480925, 1532, 3.3 MB
Restore etcd
mv /etc/kubernetes/manifests/ /etc/kubernetes/manifests.bak
mv /var/lib/etcd/ /var/lib/etcd.bak
ETCDCTL_API=3 etcdctl snapshot restore snapshotdb --data-dir=/var/lib/etcd
2020-12-20 15:30:29.156579 I | mvcc: restore compact to 479751
2020-12-20 15:30:29.176899 I | etcdserver/membership: added member 8e9e05c52164694d [http://localhost:2380] to cluster cdf818194e3a8c32
Bring etcd and the apiserver back up
mv /etc/kubernetes/manifests.bak/ /etc/kubernetes/manifests
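Once the manifests are back in place, the kubelet restarts etcd and the apiserver; a sketch of the checks (crictl on containerd nodes, docker ps on Docker nodes):
crictl ps | grep -E 'etcd|kube-apiserver'
kubectl get pods -n kube-system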
☀ Create a Pod that uses a PV
Provide the PV via hostPath (first create the /mnt/data directory on all nodes)
sudo mkdir /mnt/data
sudo sh -c "echo 'Hello from Kubernetes storage' > /mnt/data/index.html"
Create the PV (typically done by an administrator)
apiVersion: v1
kind: PersistentVolume
metadata:
  labels:
    app: my-pv
  name: my-pv
spec:
  #storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /mnt/data/
kubectl create -f pv.yaml
Create the PVC (typically done by a developer)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    type: local
  name: my-pvc
spec:
  #storageClassName: manual
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
kubectl create -f pvc.yaml
Create a pod that uses the PVC
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: my-pod
  name: my-pod
spec:
  volumes:
  - name: test-pvc
    persistentVolumeClaim:
      claimName: my-pvc
  containers:
  - image: nginx
    name: my-pod
    volumeMounts:
    - name: test-pvc
      mountPath: "/usr/share/nginx/html"
kubectl create -f my-pod.yaml
Test
kubectl exec -i -t my-pod -- curl localhost
Hello from Kubernetes storage
☀ Add a Node with a Bootstrap Token
https://www.cnblogs.com/hlc-123/articles/14163603.html
Given a cluster, add the node node1 to it using TLS bootstrapping
Reference: https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/#kube-controller-manager-configuration
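A minimal sketch for a kubeadm-managed cluster (other setups differ); the token is created on the control plane and the printed command is then run on node1:
# On the control plane
kubeadm token create --print-join-command
# On node1, run the printed command, e.g.
# kubeadm join <apiserver>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>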
☀ Create an Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tls-example-ingress
spec:
  tls:
  - hosts:
    - https-example.foo.com
    secretName: testsecret-tls
  rules:
  - host: https-example.foo.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: service1
            port:
              number: 80
kubectl create -f ingress.yaml
kubectl get ingress
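The TLS secret referenced by the Ingress must exist; a sketch with placeholder certificate and key files:
kubectl create secret tls testsecret-tls --cert=path/to/tls.crt --key=path/to/tls.key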
☀ NetworkPolicy
All pods in the default namespace can reach each other and can reach pods in other namespaces, but pods in other namespaces cannot reach pods in the default namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: default
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {}
kubectl create -f network_policy.yaml
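A rough way to verify the policy, with the other namespace and the target pod IP as placeholders:
# From a pod in default: should succeed
kubectl run test-a --rm -it --restart=Never --image=busybox -- wget -qO- -T 2 <default-pod-ip>
# From a pod in another namespace: should time out
kubectl run test-b -n <other-namespace> --rm -it --restart=Never --image=busybox -- wget -qO- -T 2 <default-pod-ip>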
☀ Questions that always appear on the exam
- etcd backup and restore
- creating a NetworkPolicy
- creating an Ingress