Fixing a Kubernetes Pod That Cannot Access the External Network
Problem and troubleshooting: We currently deploy Java microservices on K8S, but during operation we found that one Kubernetes pod could not access api.weixin.qq.com. We tried nslookup and nc to test whether the pod could reach the service ...
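The excerpt above ends before the actual commands. As a hedged sketch of that nslookup/nc check (the pod name and namespace are placeholders, not from the source):

```bash
# Hypothetical pod/namespace; test DNS resolution, then a TCP connection, from inside the pod
kubectl exec -it <pod-name> -n <namespace> -- nslookup api.weixin.qq.com
# -vz: verbose, scan-only connect test (assumes the image ships an nc that supports these flags)
kubectl exec -it <pod-name> -n <namespace> -- nc -vz api.weixin.qq.com 443
```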
Preparation

To set up a k8s cluster, I will prepare 4 machines with the following network settings:

| hostname | ip | role |
| --- | --- | --- |
| gm-mini | 192.168.31.199 | HAProxy |
| gm-red | 192.168.31.200 | k8s master |
| gm-green | 192.168.31.201 | k8s master |
| gm-blue | 192.168.31.202 | k8s worker |
| gm-orange | 192.168.31.203 | k8s worker |

1. Create the first control-plane node on gm-red

```bash
sudo kubeadm init \
  --apiserver-advertise-address=192.168.31.200 \
  --image-repository=registry.aliyuncs.com/google_containers \
  --kubernetes-version=v1.29.0 \
  --service-cidr=10.96.0.0/12 \
  --pod-network-cidr=10.244.0.0/16 \
  --cri-socket=unix:///run/containerd/containerd.sock \
  --control-plane-endpoint=192.168.31.199:6443 \
  --upload-certs
```

2. Apply the CNI network plugin (Flannel)

```bash
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
```

3. Join gm-green as an additional control-plane node

```bash
sudo kubeadm join 192.168.31.199:6443 \
  --token 7yszg3.su99ir6t8m9o8ttr \
  --discovery-token-ca-cert-hash sha256:xxxx \
  --control-plane \
  --certificate-key yyyyy
```

4. Join gm-blue and gm-orange as worker nodes

```bash
sudo kubeadm join 192.168.31.199:6443 \
  --token aaa.bbb \
  --discovery-token-ca-cert-hash sha256:xxxx
```
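The control-plane endpoint (192.168.31.199:6443) points at gm-mini, whose HAProxy configuration is not shown in the source. A minimal sketch, assuming plain TCP passthrough to both masters (the frontend/backend names are hypothetical):

```bash
# /etc/haproxy/haproxy.cfg (sketch) — TCP load balancing for the kube-apiserver
frontend k8s-api
    bind 192.168.31.199:6443
    mode tcp
    default_backend k8s-masters

backend k8s-masters
    mode tcp
    balance roundrobin
    server gm-red   192.168.31.200:6443 check
    server gm-green 192.168.31.201:6443 check
```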
Create a user CSR

```bash
openssl genrsa -out ishare.key 2048
openssl req -new -key ishare.key -out ishare.csr
```

Sign the CSR with the cluster CA

```bash
openssl x509 -req -in ishare.csr \
  -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key \
  -CAcreateserial -out ishare.crt -days 500
```

Create a Role

```yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: ishare
  name: ishare-admin
rules:
- apiGroups: ["", "extensions", "apps"]
  resources:
  - "deployments"
  - "pods"
  - "services"
  - "statefulsets"
  - "secrets"
  - "configmaps"
  - "persistentvolumes"
  - "persistentvolumeclaims"
  verbs:
  - "get"
  - "list"
  - "watch"
  - "create"
  - "update"
  - "patch"
  - "delete"
- apiGroups: ["storage.k8s.io"]
  resources:
  - "storageclasses"
  verbs:
  - "get"
  - "list"
  - "watch"
```

Create a RoleBinding

```yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: ishare-rolebinding
  namespace: ishare
subjects:
- kind: User
  name: ishare
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: ishare-admin
  apiGroup: rbac.authorization.k8s.io
```

Create .kube/config

Log in as the user to verify the authorization ...
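The .kube/config step is not spelled out in the source. A sketch of assembling one with kubectl, assuming a cluster entry named "kubernetes" (the server URL placeholder is an assumption, not from the source):

```bash
# Build a kubeconfig for the "ishare" user from the signed cert and key
kubectl config set-cluster kubernetes \
  --server=https://<apiserver-host>:6443 \
  --certificate-authority=/etc/kubernetes/pki/ca.crt --embed-certs=true
kubectl config set-credentials ishare \
  --client-certificate=ishare.crt --client-key=ishare.key --embed-certs=true
kubectl config set-context ishare@kubernetes \
  --cluster=kubernetes --namespace=ishare --user=ishare
kubectl config use-context ishare@kubernetes
```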
Catalog

- Pod
- Service
- ReplicaSet
- Deployment
- StatefulSet
- Volumes
- ConfigMap
If a container needs to be accessed from outside the cluster, or by other containers, a Kubernetes Service is needed to forward traffic to it.

Create a Service

pod-kubia.yaml

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kubia        # metadata is missing in the truncated source; name assumed from the filename
  labels:
    app: kubia       # assumed label so a Service can select this pod
spec:
  containers:
  - name: kubia
    image: luksa/kubia   # image not shown in the source; assumed from the kubia example
    ports:
    - name: http
      containerPort: 8080
    - name: https
      containerPort: 8443
```

service-kubia.yaml ...
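service-kubia.yaml is truncated in the source. As a hedged sketch only, a Service targeting the pod's named ports might look like this (the service name, label selector, and port numbers 80/443 are assumptions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: kubia
spec:
  selector:
    app: kubia        # matches the assumed pod label above
  ports:
  - name: http
    port: 80
    targetPort: http  # resolved by port name to containerPort 8080
  - name: https
    port: 443
    targetPort: https # resolved by port name to containerPort 8443
```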
Introduction to StatefulSets

StatefulSet characteristics:

- Each pod has a unique, stable identity.
- A StatefulSet ensures that two pods with the same identity never exist at the same time (at-most-one semantics).
- A StatefulSet requires a headless Service that provides DNS resolution for its pods, with hostnames of the form <pod-name>.<service-name>.default.svc.cluster.local (see the sketch after this section).

Scaling a StatefulSet

Scaling down: whenever the replica count of a StatefulSet is reduced, it is predictable which pod will be removed. For example, if a StatefulSet has 3 replicas named pod-0, pod-1, and pod-2, scaling down to 2 replicas deletes pod-2 first; scaling further down to 1 replica then deletes pod-1. ...
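To make the headless-Service requirement concrete, here is a minimal sketch, assuming a StatefulSet named kubia (the image and label are assumptions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: kubia
spec:
  clusterIP: None        # headless: each pod gets its own DNS A record
  selector:
    app: kubia
  ports:
  - port: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: kubia
spec:
  serviceName: kubia     # ties pod DNS names to the headless Service
  replicas: 3
  selector:
    matchLabels:
      app: kubia
  template:
    metadata:
      labels:
        app: kubia
    spec:
      containers:
      - name: kubia
        image: luksa/kubia   # assumed image
        ports:
        - containerPort: 8080
```

With this, the first pod would resolve as kubia-0.kubia.default.svc.cluster.local, matching the hostname format above.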
This article explains how to set up a three-node k8s cluster on Aliyun ECS.

1. Environment

- Aliyun ECS running Ubuntu 22.04, 2 cores + 4G RAM
- Docker 24.0.6
- Docker and containerd are already installed on the servers

Because containerd is installed together with Docker, we can use containerd as the runtime for k8s, but we first need to enable the CRI interface in containerd.

2. Init MASTER NODE

Add apt source ...
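As a hedged sketch of that CRI step, assuming containerd came from Docker's packages (whose stock /etc/containerd/config.toml disables the cri plugin):

```bash
# The Docker-packaged config ships with: disabled_plugins = ["cri"]
sudo sed -i 's/disabled_plugins = \["cri"\]/disabled_plugins = []/' /etc/containerd/config.toml
# Alternatively, regenerate a complete default config:
containerd config default | sudo tee /etc/containerd/config.toml
sudo systemctl restart containerd
```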
A Volume exists only as part of a Pod and cannot be created on its own; a pod's volumes are visible to all containers in the pod.

Volume overview

Volume types:

- emptyDir: temporary data (removed when the pod is removed)
- hostPath: uses the host's filesystem as the volume
- gitRepo: uses a git repository as the volume
- nfs: uses an NFS share
- gcePersistentDisk/awsElasticBlockStore/azureDisk: uses GCE/AWS/Azure disks
- cinder/cephfs/iscsi: uses network storage systems
- configMap/secret/downwardAPI: exposes special k8s objects as volumes
- persistentVolumeClaim: dynamically provisioned storage

Sharing data between containers with volumes

- emptyDir volume
- gitRepo volume
- hostPath volume

Decoupling pods from the underlying storage

PersistentVolumes and PersistentVolumeClaims

Create a PV

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongodb-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce   # one client may mount read-write
  - ReadOnlyMany    # many clients may mount read-only
  # keep the PV after its PVC is deleted
  persistentVolumeReclaimPolicy: Retain
  # backed by a GCE persistent disk
  gcePersistentDisk:
    pdName: mongodb
    fsType: ext4
```

Declare a PVC to claim the PV

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongodb-pvc
spec:
  resources:
    requests:
      storage: 1Gi
  accessModes:
  - ReadWriteOnce   # only one client may mount read-write
  storageClassName: ""
```

Use the PVC in a Pod

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mongodb
spec:
  containers:
  - image: mongo
    name: mongodb
    volumeMounts:
    - name: mongodb-data
      mountPath: /data/db
    ports:
    - containerPort: 27017
      protocol: TCP
  volumes:
  - name: mongodb-data
    persistentVolumeClaim:
      claimName: mongodb-pvc
```

Reclaiming PVs

A PV can be in one of the following states: ...
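The empty storageClassName above deliberately binds the claim to a pre-provisioned PV. As a hedged sketch of the dynamic-provisioning path instead, assuming a hypothetical StorageClass named fast backed by the in-tree GCE PD provisioner:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast                        # hypothetical class name
provisioner: kubernetes.io/gce-pd   # in-tree GCE provisioner, matching the PV example above
parameters:
  type: pd-ssd
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongodb-pvc-dynamic
spec:
  storageClassName: fast            # the provisioner creates a matching PV on demand
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```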