CEPH Journal 1 - How to Remove an OSD

This article is based on ceph version 13.2.6 (7b695f835b03642f85998b2ae7b6dd093d9fbce4) mimic (stable).


The CEPH cluster in question is configured as follows:

2 OSD nodes, each with an Intel E5-2670 and 32GB REG ECC memory.

OSD 0-10: Seagate ST6000NM0034 6TB SAS HDD x 11.

OSD 13-20: Intel P3700 800GB NVMe x 2, each NVMe split into four 200GB LVM volumes so that the NVMe's performance can be used fully (a sketch of this layout is shown below).

The HBA is a Fujitsu PRAID CP400i, and the NIC is a Mellanox ConnectX-3 56G.
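
For reference, splitting one NVMe device into several OSDs can be done with plain LVM plus ceph-volume. The following is only a minimal sketch under assumed names (/dev/nvme0n1, volume group ceph-nvme0, LVs osd-lv1..osd-lv4), not the exact commands used on this cluster:

# Assumed device; adjust to the real NVMe device on the host.
pvcreate /dev/nvme0n1
vgcreate ceph-nvme0 /dev/nvme0n1

# Carve the device into four roughly equal logical volumes, one per OSD.
for i in 1 2 3; do lvcreate -l 25%VG -n osd-lv$i ceph-nvme0; done
lvcreate -l 100%FREE -n osd-lv4 ceph-nvme0

# Create a BlueStore OSD on each logical volume (repeat for osd-lv2..osd-lv4).
ceph-volume lvm create --bluestore --data ceph-nvme0/osd-lv1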


While checking CEPH today, I found that one OSD is close to full.

[root@storage02-ib ~]# ceph osd status
+----+----------------------------+-------+-------+--------+---------+--------+---------+------------------------+
| id |            host            |  used | avail | wr ops | wr data | rd ops | rd data |         state          |
+----+----------------------------+-------+-------+--------+---------+--------+---------+------------------------+
| 0  | storage02-ib.lobj.eth6.org | 1599G | 3989G |    0   |     0   |    0   |     0   |       exists,up        |
| 1  | storage02-ib.lobj.eth6.org | 1625G | 3963G |    0   |     0   |    0   |     0   |       exists,up        |
| 2  | storage02-ib.lobj.eth6.org | 1666G | 3922G |    0   |     0   |    0   |     0   |       exists,up        |
| 3  | storage02-ib.lobj.eth6.org | 1817G | 3771G |    0   |     0   |    0   |     0   |       exists,up        |
| 4  | storage02-ib.lobj.eth6.org | 1868G | 3720G |    0   |     0   |    0   |     0   |       exists,up        |
| 5  | storage03-ib.lobj.eth6.org | 1685G | 3903G |    0   |     0   |    0   |     0   |       exists,up        |
| 6  | storage03-ib.lobj.eth6.org | 1686G | 3902G |    0   |     0   |    0   |     0   |       exists,up        |
| 7  | storage03-ib.lobj.eth6.org | 1153G | 4435G |    0   |     0   |    1   |     0   |       exists,up        |
| 8  | storage03-ib.lobj.eth6.org | 1374G | 4214G |    0   |     0   |    0   |     0   |       exists,up        |
| 9  | storage03-ib.lobj.eth6.org | 2098G | 3490G |    0   |     0   |    0   |     0   |       exists,up        |
| 10 | storage03-ib.lobj.eth6.org | 1715G | 3873G |    0   |     0   |    0   |     0   |       exists,up        |
| 13 | storage02-ib.lobj.eth6.org |  172G | 13.7G |    0   |     0   |    0   |     0   | backfillfull,exists,up |
| 14 | storage02-ib.lobj.eth6.org | 43.9G |  142G |    0   |     0   |    0   |     0   |       exists,up        |
| 15 | storage02-ib.lobj.eth6.org | 79.2G |  106G |    0   |     0   |    0   |     0   |       exists,up        |
| 16 | storage02-ib.lobj.eth6.org | 10.1G |  176G |    0   |     0   |    0   |     0   |       exists,up        |
| 17 | storage03-ib.lobj.eth6.org |  102G | 83.4G |    0   |     0   |    0   |     0   |       exists,up        |
| 18 | storage03-ib.lobj.eth6.org | 10.2G |  176G |    0   |     0   |    0   |     0   |       exists,up        |
| 19 | storage03-ib.lobj.eth6.org | 10.1G |  176G |    0   |     0   |    0   |     0   |       exists,up        |
| 20 | storage03-ib.lobj.eth6.org |  137G | 49.1G |    0   |     0   |    0   |     0   |       exists,up        |
+----+----------------------------+-------+-------+--------+---------+--------+---------+------------------------+

As shown above, the ceph osd status command shows that OSD 13 is close to full.

[root@storage02-ib ~]# ceph -s
  cluster:
    id:     0f7be0a4-2a05-4658-8829-f3d2f62579d2
    health: HEALTH_WARN
            1 backfillfull osd(s)
            5 pool(s) backfillfull
            367931/4527742 objects misplaced (8.126%)
 
  services:
    mon: 3 daemons, quorum storage01-ib,storage02-ib,storage03-ib
    mgr: storage01-ib(active), standbys: storage03-ib, storage02-ib
    osd: 19 osds: 19 up, 18 in; 41 remapped pgs
    rgw: 2 daemons active
 
  data:
    pools:   5 pools, 288 pgs
    objects: 2.26 M objects, 8.6 TiB
    usage:   18 TiB used, 43 TiB / 61 TiB avail
    pgs:     367931/4527742 objects misplaced (8.126%)
             247 active+clean
             40  active+remapped+backfill_wait
             1   active+remapped+backfilling
 
  io:
    client:   1.0 MiB/s rd, 65 op/s rd, 0 op/s wr

The ceph -s command also reports 1 backfillfull osd(s). Since no mapping between the pools and specific OSDs has been configured, by default every OSD carries data for these pools, which is why 5 pool(s) backfillfull is reported as well.
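
To see which CRUSH rule a pool actually uses (and therefore which OSDs may hold its data), commands like the following can be used; the rule name here is just the default and may differ on another cluster:

# List the pools, then check the CRUSH rule of one of them.
ceph osd pool ls
ceph osd pool get <pool-name> crush_rule

# Dump the rule to see which root / device class it selects OSDs from.
ceph osd crush rule dump replicated_rule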

The objects misplaced message is there because a disk was replaced during earlier maintenance and its data is still being recovered.


The nearly full OSD happens to be on the NVMe storage that we had not yet gotten around to retiring, and the NVMe drives are wanted for other purposes, so this time I decided to simply take the NVMe OSDs out altogether.

When OSDs 13-20 (the NVMe OSDs) are taken out, CEPH automatically rebalances the data onto the other OSDs. So all we need to do is mark the OSDs out first, then delete the OSDs, and finally remove the hardware.
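
Put together, removing a single OSD boils down to the short sequence below (N is a placeholder for the OSD id); each step is walked through in the rest of this article:

ceph osd out N                              # mark the OSD out; data starts migrating away
# ... wait until ceph -s shows the backfill has finished ...
systemctl stop ceph-osd@N                   # on the host that runs this OSD
ceph osd purge N --yes-i-really-mean-it     # remove it from the CRUSH map, OSD map and auth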


Before removing anything, always calculate in advance whether the remaining capacity is sufficient. If too many OSDs are taken out and the overall capacity becomes inadequate, the remaining OSDs will approach full, which degrades cluster read/write performance and can even make the cluster unable to serve reads and writes at all.
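
A quick way to sanity-check the capacity math before marking anything out:

# Cluster-wide and per-pool usage.
ceph df

# Per-OSD size, usage and variance, laid out along the CRUSH tree; useful for
# estimating whether the remaining OSDs can absorb the data being moved off.
ceph osd df tree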

The removal process also consumes disk I/O and network bandwidth, so if the cluster serves production traffic, schedule the removal window carefully to avoid impacting the business.
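
If the rebalance load is a concern, backfill and recovery can be throttled while the data moves. These are standard OSD options; the values below are just conservative examples:

# Limit concurrent backfills and recovery operations per OSD.
ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'

# Optionally sleep a little between recovery ops to further reduce the I/O impact.
ceph tell osd.* injectargs '--osd-recovery-sleep 0.1'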


First, we remove OSD 13 and 14 as a demonstration.

[root@storage02-ib ~]# ceph osd out 13 14;
marked out osd.13 osd.14


[root@storage02-ib ~]# ceph -s
  cluster:
    id:     0f7be0a4-2a05-4658-8829-f3d2f62579d2
    health: HEALTH_WARN
            258661/4527742 objects misplaced (5.713%)
            Degraded data redundancy: 22131/4527742 objects degraded (0.489%), 3 pgs degraded
 
  services:
    mon: 3 daemons, quorum storage01-ib,storage02-ib,storage03-ib
    mgr: storage01-ib(active), standbys: storage03-ib, storage02-ib
    osd: 19 osds: 19 up, 11 in; 36 remapped pgs
    rgw: 2 daemons active
 
  data:
    pools:   5 pools, 288 pgs
    objects: 2.26 M objects, 8.6 TiB
    usage:   18 TiB used, 43 TiB / 61 TiB avail
    pgs:     22131/4527742 objects degraded (0.489%)
             258661/4527742 objects misplaced (5.713%)
             252 active+clean
             28  active+remapped+backfill_wait
             5   active+remapped+backfilling
             2   active+undersized+degraded+remapped+backfill_wait
             1   active+undersized+degraded+remapped+backfilling
 
  io:
    recovery: 90 MiB/s, 22 objects/s

You can see 258661/4527742 objects misplaced and 22131/4527742 objects degraded reported; this is the data that is being rebalanced automatically.

Once the rebalancing has finished, we take the remaining OSDs out one after another in the same way.
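
A simple way to tell when it is safe to continue is to watch the PG states until nothing is left in a backfill state:

# Watch the overall status, or check the PG summary directly; continue once
# all PGs are active+clean and none remain in backfill_wait / backfilling.
watch -n 10 ceph -s
ceph pg stat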


Then, on the machine that hosts the corresponding OSD, stop that OSD daemon (the example below uses OSD 20).

[root@storage02-ib ~]# systemctl stop ceph-osd@20
[root@storage02-ib ~]#
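
Before purging, it is worth double-checking that the daemon really stopped and that the cluster now sees the OSD as down:

# The unit should report inactive on the OSD's host.
systemctl is-active ceph-osd@20

# The cluster should show the OSD as down (and out, from the earlier step).
ceph osd tree | grep 'osd.20'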

Finally, after confirming that the daemon has stopped, run purge to remove the OSD from the cluster completely.

[root@storage02-ib ~]# ceph osd purge 20 --yes-i-really-mean-it
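
As an extra safety net when purging the remaining OSDs, the cluster can also be asked whether destroying an OSD would lose any data:

# Returns OK only when all PGs that used this OSD are fully recovered elsewhere.
ceph osd safe-to-destroy 20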


If the OSD is also configured in the ceph.conf file, remove that entry as well and then redistribute the configuration to the other nodes.
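
For reference, such a ceph.conf entry and one possible way to push the updated file are sketched below; the [osd.20] section and the use of ceph-deploy are assumptions about this setup, not something taken from this cluster:

# Example of a per-OSD section in /etc/ceph/ceph.conf that would need deleting:
#   [osd.20]
#       host = storage03-ib

# If the cluster is managed with ceph-deploy (assumption), push the updated file:
ceph-deploy --overwrite-conf config push storage01-ib storage02-ib storage03-ib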

Finally, shut the machine down and pull the hardware.
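
Before pulling a drive, it can also help to confirm which physical device backed the purged OSD and to wipe its LVM metadata so the device can be reused; ceph-volume can do both (run on the OSD's host; the vg/lv name below is an assumed example):

# Show the devices / logical volumes associated with the OSDs on this host.
ceph-volume lvm list

# Wipe the logical volume; --destroy also removes the LV and VG.
ceph-volume lvm zap --destroy ceph-nvme0/osd-lv1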



Reposted from juejin.im/post/5d3f92a55188255d4c70d6f0