I'm using NFS as storage. The PVC and PV are both in Bound state, and I've verified that pods can write files to the NFS share, but setting up MySQL fails with this error:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 25m default-scheduler 0/4 nodes are available: 4 pod has unbound immediate PersistentVolumeClaims. preemption: 0/4 nodes are available: 4 Preemption is not helpful for scheduling.
Normal Scheduled 25m default-scheduler Successfully assigned lzipant/mysql-0 to arch124
Normal Pulling 25m kubelet Pulling image "mysql:5.7"
Normal Pulled 25m kubelet Successfully pulled image "mysql:5.7" in 3.099960834s
Normal Created 24m (x5 over 25m) kubelet Created container init-mysql
Normal Pulled 24m (x4 over 25m) kubelet Container image "mysql:5.7" already present on machine
Normal Started 24m (x5 over 25m) kubelet Started container init-mysql
Warning BackOff 43s (x117 over 25m) kubelet Back-off restarting failed container
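The events show init-mysql being created and started five times before the back-off, so that container's logs and the PVC state are the natural first things to pull. A couple of inspection commands (pod and claim names derived from the events and the manifests below):

# logs of the crash-looping init container, including the previous attempt
kubectl -n lzipant logs mysql-0 -c init-mysql
kubectl -n lzipant logs mysql-0 -c init-mysql --previous
# state and events of the PVC created from the volumeClaimTemplate
kubectl -n lzipant describe pvc mysql-data-mysql-0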
The YAML files are as follows:
configMap.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql
  namespace: lzipant
  labels:
    app: mysql
data:
  master.cnf: |
    [mysqld]
    log-bin
  slave.cnf: |
    [mysqld]
    super-read-only
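Before the init container tries to copy master.cnf or slave.cnf, it's easy to confirm the ConfigMap actually landed with both keys:

kubectl -n lzipant get configmap mysql -o yaml
# or list just the data keys
kubectl -n lzipant get configmap mysql -o jsonpath='{.data}'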
service.yaml:
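The content pasted under service.yaml duplicated the ConfigMap above, so the actual Service manifest is missing from the post. The StatefulSet below sets serviceName: mysql, which requires a headless Service of that name; a minimal sketch of the usual definition (an assumption, not necessarily the file that was actually applied):

apiVersion: v1
kind: Service
metadata:
  name: mysql
  namespace: lzipant
  labels:
    app: mysql
spec:
  ports:
  - name: mysql
    port: 3306
  # headless: gives the pods their stable DNS names (mysql-0.mysql, ...)
  clusterIP: None
  selector:
    app: mysql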
statefulSet.yaml:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
  namespace: lzipant
spec:
  selector:
    matchLabels:
      # matches every pod whose labels include app=mysql
      app: mysql
  serviceName: mysql
  replicas: 3
  # pod template
  template:
    metadata:
      labels:
        app: mysql
    spec:
      # init containers prepare the environment for the pod's mysql container
      initContainers:
      # init-mysql decides whether the pod's role is master or slave, then generates its config file
      - name: init-mysql
        image: mysql:5.7
        command:
        - bash
        - "-c"
        - |
          set -ex
          # generate the server-id from the ordinal in the hostname
          [[ `hostname` =~ -([0-9]+)$ ]] || exit 1
          ordinal=${BASH_REMATCH[1]}
          echo [mysqld] > /mnt/conf.d/server-id.cnf
          # write the server-id
          echo server-id=$((100 + $ordinal)) >> /mnt/conf.d/server-id.cnf
          # ordinal 0 becomes the master, everything else a slave
          # the cnf copied here is included by mysql.cnf together with server-id.cnf
          # so the pod with ordinal 0 serves writes as master and the other pods serve reads as slaves
          if [[ $ordinal -eq 0 ]]; then
            cp /mnt/config-map/master.cnf /mnt/conf.d/
          else
            cp /mnt/config-map/slave.cnf /mnt/conf.d/
          fi
        volumeMounts:
        # mount the conf ephemeral volume at /mnt/conf.d in the pod
        - name: conf
          mountPath: /mnt/conf.d
        # mount the ConfigMap at /mnt/config-map in the pod
        - name: config-map
          mountPath: /mnt/config-map
      # this init container assumes data from a previous run may already exist and copies it
      # over on pod startup, so a new pod has data ready to serve
      - name: clone-mysql
        # xtrabackup is an open-source tool for cloning mysql data
        image: ist0ne/xtrabackup:latest
        command:
        - bash
        - "-c"
        - |
          set -ex
          # Skip the clone if data already exists.
          [[ -d /var/lib/mysql/mysql ]] && exit 0
          # Skip the clone on master (ordinal index 0).
          [[ `hostname` =~ -([0-9]+)$ ]] || exit 1
          ordinal=${BASH_REMATCH[1]}
          [[ $ordinal -eq 0 ]] && exit 0
          # Clone data from previous peer.
          ncat --recv-only mysql-$(($ordinal-1)).mysql 3307 | xbstream -x -C /var/lib/mysql
          # Prepare the backup.
          xtrabackup --prepare --target-dir=/var/lib/mysql
        volumeMounts:
        - name: mysql-data
          mountPath: /var/lib/mysql
          subPath: mysql
        - name: conf
          mountPath: /etc/mysql/conf.d
      containers:
      # the mysql container that actually runs mysqld
      - name: mysql
        image: mysql:5.7
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "abcdef"
        ports:
        - name: mysql
          containerPort: 3306
        volumeMounts:
        # mount the mysql directory of the data volume at /var/lib/mysql
        - name: mysql-data
          mountPath: /var/lib/mysql
          subPath: mysql
        - name: conf
          mountPath: /etc/mysql/conf.d
        resources:
          requests:
            cpu: 500m
            memory: 1Gi
        # liveness probe; the pod is restarted if it fails
        livenessProbe:
          exec:
            command: ["mysqladmin", "ping"]
          initialDelaySeconds: 30
          periodSeconds: 10
          timeoutSeconds: 5
        # readiness probe; on failure the pod is removed from the endpoints of the associated service
        readinessProbe:
          exec:
            # Check we can execute queries over TCP (skip-networking is off).
            command: ["mysql", "-h", "127.0.0.1", "-e", "SELECT 1"]
          initialDelaySeconds: 5
          periodSeconds: 2
          timeoutSeconds: 1
      # after init, an xtrabackup container is also started, running as a sidecar of the mysqld container
      - name: xtrabackup
        image: ist0ne/xtrabackup:latest
        ports:
        - name: xtrabackup
          containerPort: 3307
        command:
        - bash
        - "-c"
        - |
          set -ex
          cd /var/lib/mysql
          # on startup it checks whether cloned data files exist; if the clone came from another
          # slave it reuses that position, otherwise it parses the master's binlog position
          # Determine binlog position of cloned data, if any.
          if [[ -f xtrabackup_slave_info && "x$(<xtrabackup_slave_info)" != "x" ]]; then
            # XtraBackup already generated a partial "CHANGE MASTER TO" query
            # because we're cloning from an existing slave. (Need to remove the trailing semicolon!)
            cat xtrabackup_slave_info | sed -E 's/;$//g' > change_master_to.sql.in
            # Ignore xtrabackup_binlog_info in this case (it's useless).
            rm -f xtrabackup_slave_info xtrabackup_binlog_info
          elif [[ -f xtrabackup_binlog_info ]]; then
            # We're cloning directly from master. Parse binlog position.
            [[ `cat xtrabackup_binlog_info` =~ ^(.*?)[[:space:]]+(.*?)$ ]] || exit 1
            rm -f xtrabackup_binlog_info xtrabackup_slave_info
            echo "CHANGE MASTER TO MASTER_LOG_FILE='${BASH_REMATCH[1]}',\
                  MASTER_LOG_POS=${BASH_REMATCH[2]}" > change_master_to.sql.in
          fi
          # Check if we need to complete a clone by starting replication.
          if [[ -f change_master_to.sql.in ]]; then
            echo "Waiting for mysqld to be ready (accepting connections)"
            until mysql -h 127.0.0.1 -e "SELECT 1"; do sleep 1; done
            echo "Initializing replication from clone position"
            mysql -h 127.0.0.1 \
                  -e "$(<change_master_to.sql.in), \
                      MASTER_HOST='mysql-0.mysql', \
                      MASTER_USER='root', \
                      MASTER_PASSWORD='', \
                      MASTER_CONNECT_RETRY=10; \
                      START SLAVE;" || exit 1
            # In case of container restart, attempt this at-most-once.
            mv change_master_to.sql.in change_master_to.sql.orig
          fi
          # Start a server to send backups when requested by peers.
          exec ncat --listen --keep-open --send-only --max-conns=1 3307 -c \
            "xtrabackup --backup --slave-info --stream=xbstream --host=127.0.0.1 --user=root"
        volumeMounts:
        # mount the mysql directory of the data volume at /var/lib/mysql
        - name: mysql-data
          mountPath: /var/lib/mysql
          subPath: mysql
        - name: conf
          mountPath: /etc/mysql/conf.d
      volumes:
      - name: conf
        # an emptyDir is deleted when the pod is removed from its node;
        # it is usually used as a cache directory, here it holds the generated config
        emptyDir: {}
      - name: config-map
        # data stored in a ConfigMap object can be referenced through a configMap volume
        # and consumed by containers in the pod; this references the ConfigMap named mysql defined above
        configMap:
          name: mysql
  volumeClaimTemplates:
  # this is a PVC template: no PVC is created for mysql by hand, the claims are provisioned dynamically
  - metadata:
      name: mysql-data
      namespace: lzipant
    spec:
      accessModes: ["ReadWriteOnce"]
      # if no default storageClass is configured, storageClassName must be set explicitly
      storageClassName: managed-nfs-storage
      resources:
        requests:
          storage: 5Gi
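As a side note on the init-mysql logic above: the hostname regex plus the arithmetic determine each pod's server-id and role. Run standalone against sample StatefulSet hostnames, the same logic behaves like this (a self-contained bash sketch, not part of the manifests):

#!/usr/bin/env bash
# same ordinal extraction as init-mysql, applied to sample StatefulSet hostnames
for host in mysql-0 mysql-1 mysql-2; do
  [[ $host =~ -([0-9]+)$ ]] || continue
  ordinal=${BASH_REMATCH[1]}
  echo "$host -> server-id=$((100 + ordinal)), role=$([[ $ordinal -eq 0 ]] && echo master || echo slave)"
done
# mysql-0 -> server-id=100, role=master
# mysql-1 -> server-id=101, role=slave
# mysql-2 -> server-id=102, role=slave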
storageClass.yaml:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
  namespace: lzipant
provisioner: fuseim.pri/ifs # must match the provisioner deployment's PROVISIONER_NAME env
reclaimPolicy: Retain
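Two things are worth noting here: a StorageClass is cluster-scoped, so the namespace field on it is ignored; and given the FailedScheduling warning about unbound PersistentVolumeClaims, it makes sense to confirm the NFS provisioner is actually running and registered under fuseim.pri/ifs. A few checks (the deployment name nfs-client-provisioner is an assumption; adjust to the actual one):

# is the provisioner pod running, and under which PROVISIONER_NAME?
kubectl get pods -A | grep -i nfs
kubectl get deploy nfs-client-provisioner -o jsonpath='{.spec.template.spec.containers[0].env}'
# did the claims from the volumeClaimTemplate bind?
kubectl -n lzipant get pvc
kubectl get pv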