Error adding a disk as an OSD while creating a cluster with cephadm — hoping someone here can help

55 days ago
kevin123456

Operations performed on the disk:

1. Wiped the partition table and signatures with wipefs -a -f /dev/sdc, then confirmed with blkid that all disk signatures were cleared.
2. Suspecting the disk might be faulty, I partitioned, formatted, and mounted it to verify it could read and write normally; the disk is fine. I then repeated step 1, but the problem persists.

Running ceph orch daemon add osd ceph-2:/dev/sdc fails with what looks like a permission error, yet the exact same steps and the same add osd command work on my other two hosts. Everything is run as root. The error:

Error EINVAL: Traceback (most recent call last):
  File "/usr/share/ceph/mgr/mgr_module.py", line 1809, in _handle_command
    return self.handle_command(inbuf, cmd)
  File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 183, in handle_command
    return dispatch[cmd['prefix']].call(self, cmd, inbuf)
  File "/usr/share/ceph/mgr/mgr_module.py", line 474, in call
    return self.func(mgr, **kwargs)
  File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 119, in <lambda>
    wrapper_copy = lambda *l_args, **l_kwargs: wrapper(*l_args, **l_kwargs)  # noqa: E731
  File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 108, in wrapper
    return func(*args, **kwargs)
  File "/usr/share/ceph/mgr/orchestrator/module.py", line 1100, in _daemon_add_osd
    raise_if_exception(completion)
  File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 240, in raise_if_exception
    raise e
RuntimeError: cephadm exited with an error code: 1, stderr:
Inferring config /var/lib/ceph/fd4f9c76-0c43-11ef-a0ee-37aaf28eb433/mon.ceph-2/config
Non-zero exit code 1 from /usr/bin/docker run --rm --ipc=host --stop-signal=SIGTERM --ulimit nofile=1048576 --net=host --entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk --init -e CONTAINER_IMAGE=quay.io/ceph/ceph@sha256:d55fa1ab2bf753fab11fdd9f1bb9106ee5aecccdd0e9532ef0125bead1adf3c9 -e NODE_NAME=ceph-2 -e CEPH_USE_RANDOM_NONCE=1 -e CEPH_VOLUME_OSDSPEC_AFFINITY=None -e CEPH_VOLUME_SKIP_RESTORECON=yes -e CEPH_VOLUME_DEBUG=1 -v /var/run/ceph/fd4f9c76-0c43-11ef-a0ee-37aaf28eb433:/var/run/ceph:z -v /var/log/ceph/fd4f9c76-0c43-11ef-a0ee-37aaf28eb433:/var/log/ceph:z -v /var/lib/ceph/fd4f9c76-0c43-11ef-a0ee-37aaf28eb433/crash:/var/lib/ceph/crash:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /:/rootfs -v /tmp/ceph-tmp92x1hz9n:/etc/ceph/ceph.conf:z -v /tmp/ceph-tmptnkwq47u:/var/lib/ceph/bootstrap-osd/ceph.keyring:z quay.io/ceph/ceph@sha256:d55fa1ab2bf753fab11fdd9f1bb9106ee5aecccdd0e9532ef0125bead1adf3c9 lvm batch --no-auto /dev/sdc --yes --no-systemd
/usr/bin/docker: stderr --> passed data devices: 1 physical, 0 LVM
/usr/bin/docker: stderr --> relative data size: 1.0
/usr/bin/docker: stderr Running command: /usr/bin/ceph-authtool --gen-print-key
/usr/bin/docker: stderr Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new be074625-76ae-4343-a080-0b4a7df31bde
/usr/bin/docker: stderr Running command: nsenter --mount=/rootfs/proc/1/ns/mnt --ipc=/rootfs/proc/1/ns/ipc --net=/rootfs/proc/1/ns/net --uts=/rootfs/proc/1/ns/uts /sbin/vgcreate --force --yes ceph-fd0addc7-e222-4413-850d-ff99e750a4f9 /dev/sdc
/usr/bin/docker: stderr stdout: Physical volume "/dev/sdc" successfully created.
/usr/bin/docker: stderr stdout: Volume group "ceph-fd0addc7-e222-4413-850d-ff99e750a4f9" successfully created
/usr/bin/docker: stderr Running command: nsenter --mount=/rootfs/proc/1/ns/mnt --ipc=/rootfs/proc/1/ns/ipc --net=/rootfs/proc/1/ns/net --uts=/rootfs/proc/1/ns/uts /sbin/lvcreate --yes -l 142812 -n osd-block-be074625-76ae-4343-a080-0b4a7df31bde ceph-fd0addc7-e222-4413-850d-ff99e750a4f9
/usr/bin/docker: stderr stdout: Wiping xfs signature on /dev/ceph-fd0addc7-e222-4413-850d-ff99e750a4f9/osd-block-be074625-76ae-4343-a080-0b4a7df31bde.
/usr/bin/docker: stderr stdout: Logical volume "osd-block-be074625-76ae-4343-a080-0b4a7df31bde" created.
/usr/bin/docker: stderr Running command: /usr/bin/ceph-authtool --gen-print-key
/usr/bin/docker: stderr Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
/usr/bin/docker: stderr Running command: /usr/bin/chown -h ceph:ceph /dev/ceph-fd0addc7-e222-4413-850d-ff99e750a4f9/osd-block-be074625-76ae-4343-a080-0b4a7df31bde
/usr/bin/docker: stderr Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
/usr/bin/docker: stderr Running command: /usr/bin/ln -s /dev/ceph-fd0addc7-e222-4413-850d-ff99e750a4f9/osd-block-be074625-76ae-4343-a080-0b4a7df31bde /var/lib/ceph/osd/ceph-2/block
/usr/bin/docker: stderr Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
/usr/bin/docker: stderr stderr: got monmap epoch 3
/usr/bin/docker: stderr --> Creating keyring file for osd.2
/usr/bin/docker: stderr Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
/usr/bin/docker: stderr Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
/usr/bin/docker: stderr Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osdspec-affinity None --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid be074625-76ae-4343-a080-0b4a7df31bde --setuser ceph --setgroup ceph
/usr/bin/docker: stderr stderr: 2024-05-08T01:49:19.207+0000 7f21171bf640 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
/usr/bin/docker: stderr stderr: 2024-05-08T01:49:19.207+0000 7f21171bf640 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
/usr/bin/docker: stderr stderr: 2024-05-08T01:49:19.207+0000 7f21171bf640 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
/usr/bin/docker: stderr stderr: 2024-05-08T01:49:19.207+0000 7f21171bf640 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid
/usr/bin/docker: stderr stderr: 2024-05-08T01:49:19.735+0000 7f21171bf640 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label failed to open /var/lib/ceph/osd/ceph-2//block: (13) Permission denied
/usr/bin/docker: stderr stderr: 2024-05-08T01:49:19.735+0000 7f21171bf640 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label failed to open /var/lib/ceph/osd/ceph-2//block: (13) Permission denied
/usr/bin/docker: stderr stderr: 2024-05-08T01:49:19.735+0000 7f21171bf640 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label failed to open /var/lib/ceph/osd/ceph-2//block: (13) Permission denied
/usr/bin/docker: stderr stderr: 2024-05-08T01:49:19.735+0000 7f21171bf640 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label failed to open /var/lib/ceph/osd/ceph-2//block: (13) Permission denied
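Worth noting from the log: vgcreate and lvcreate both succeeded before the ceph-osd --mkfs step failed, so the failed attempt leaves an LVM volume group claiming /dev/sdc that must be removed before retrying. A dry-run sketch of the reset steps, assuming the device path and VG name from the log above (the DRY_RUN guard is my addition, since the commands are destructive):

```shell
#!/bin/sh
# Dry-run sketch of resetting the disk after a failed ceph-volume attempt.
# DEV, the VG name, and the DRY_RUN toggle are taken/assumed from the log
# above; this only prints the commands unless DRY_RUN=0.
DEV="${DEV:-/dev/sdc}"
DRY_RUN="${DRY_RUN:-1}"

run() {
  # Echo instead of executing when DRY_RUN=1.
  if [ "$DRY_RUN" = "1" ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

# Remove the volume group and physical volume created by the failed run
# (VG name copied from the vgcreate line in the error log).
run vgremove --force ceph-fd0addc7-e222-4413-850d-ff99e750a4f9
run pvremove "$DEV"
# Then wipe remaining signatures, as the poster already did.
run wipefs -a -f "$DEV"
```

Only after the leftover VG is gone does wipefs actually leave the disk in a state ceph-volume will treat as fresh.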

340 views
Node: 程序员 (Programmers)
2 replies
kevin123456
55 days ago
Addendum: when listing available devices, AVAILABLE is also Yes:
root@ceph-1:~# ceph orch device ls
HOST    PATH      TYPE  DEVICE ID                                         SIZE   AVAILABLE  REFRESHED  REJECT REASONS
ceph-2  /dev/sdb  hdd   ServeRAID_M5210_600605b00d27dc702c88490a0cb95b33  1116G  Yes        2m ago
ceph-2  /dev/sdc  hdd   ServeRAID_M5210_600605b00d27dc702c8849250e5e0aeb  557G   Yes        2m ago
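AVAILABLE=Yes only means ceph-volume found no reject reasons; it does not prove the disk is free of leftover state. cephadm can also wipe a device itself with ceph orch device zap. A sketch that only prints the command, since zapping is destructive (host and path taken from the listing above):

```shell
#!/bin/sh
# Sketch: let cephadm zap the device instead of running wipefs on the host.
# Printed rather than executed here because the operation destroys data.
ZAP_CMD="ceph orch device zap ceph-2 /dev/sdc --force"
echo "would run: $ZAP_CMD"
# To actually run it on the admin node: eval "$ZAP_CMD"
```

The advantage over a plain wipefs is that zap also tears down any LVM volumes ceph-volume created on the device.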
kevin123456
55 days ago
Solved. The cause was the ceph-osd package that had been installed with apt; after removing it, add osd works normally.
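A plausible explanation, though not confirmed in the thread: a host-installed ceph-osd package creates its own local ceph user and udev ownership rules, which can clash with the uid the containerized OSD expects, matching the "(13) Permission denied" on the block device. A small check for the conflicting package (Debian/Ubuntu with dpkg assumed):

```shell
#!/bin/sh
# Check whether a host ceph-osd package is present that would conflict
# with cephadm's containerized OSDs (Debian/Ubuntu assumed).
check_host_ceph_osd() {
  if command -v dpkg >/dev/null 2>&1 && dpkg -s ceph-osd >/dev/null 2>&1; then
    echo "conflict: host ceph-osd package present; try: apt remove --purge ceph-osd"
    return 1
  fi
  echo "ok: no host ceph-osd package detected"
}
check_host_ceph_osd || true
```

Running this on all hosts before ceph orch daemon add osd would have caught the mismatch between ceph-2 and the two working hosts.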

https://www.v2ex.com/t/1038626
