Ceph Distributed Storage Series (5): How to Limit the Size of a Pool

Unlike a volume, a pool cannot be given a fixed size at creation time, but Ceph does provide a quota feature to limit it.

There are two ways to set a limit:

  • Limit the number of objects in the pool (max_objects)
  • Limit the amount of data stored in the pool (max_bytes)

In short, it only takes a few commands.

View a pool's quota settings
$ ceph osd pool get-quota {pool_name}

Limit the number of objects stored in the pool
$ ceph osd pool set-quota {pool_name} max_objects {number}

Limit the maximum amount of data stored in the pool
$ ceph osd pool set-quota {pool_name} max_bytes {number}
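If you need these values in a script, the ceph CLI's standard -f json output flag works here as well. A minimal sketch, assuming jq is installed; the exact field names (quota_max_objects / quota_max_bytes) may vary by release, so verify against your cluster's output:

$ ceph osd pool get-quota {pool_name} -f json | jq '{objects: .quota_max_objects, bytes: .quota_max_bytes}'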

Let's run a quick test of both.

1. Limiting the number of objects in a pool

Create a pool named test with 8 PGs:

[root@ceph-node1 ~]# ceph osd pool create test 8
pool 'test' created
[root@ceph-node1 ~]# ceph osd pool application enable test rbd
enabled application 'rbd' on pool 'test'
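Optionally, confirm the pool's settings before testing; the three-way replication shown here matters for the usage numbers later on. A sketch of the check (elided fields replaced with "...", and the exact fields vary by Ceph release):

[root@ceph-node1 ~]# ceph osd pool ls detail | grep test
pool 9 'test' replicated size 3 min_size 2 ... pg_num 8 pgp_num 8 ... application rbd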

Check the cluster status and the pool's usage:

[root@ceph-node1 mnt]# ceph -s
  cluster:
    id:     130b5ac0-938a-4fd2-ba6f-3d37e1a4e908
    health: HEALTH_OK
....
[root@ceph-node1 ~]# ceph df |grep POOLS -A 2
POOLS:
    POOL     ID     PGS     STORED     OBJECTS     USED     %USED     MAX AVAIL
    test      9       8        0 B           0      0 B         0       8.7 GiB

Check the pool's current quota (N/A means no limit is set):

[root@ceph-node1 ~]# ceph osd pool get-quota test
quotas for pool 'test':
  max objects: N/A
  max bytes  : N/A

Set max_objects to limit the number of objects:

[root@ceph-node1 ~]# ceph osd pool set-quota test max_objects 10
set-quota max_objects = 10 for pool test
[root@ceph-node1 ~]#
[root@ceph-node1 ~]# ceph osd pool get-quota test
quotas for pool 'test':
  max objects: 10 objects
  max bytes  : N/A

Create a 10 MB test file and manually upload it as an object to test with:

[root@ceph-node1 mnt]# dd if=/dev/zero of=/mnt/file bs=10M count=1
1+0 records in
1+0 records out
10485760 bytes (10 MB) copied, 0.0452471 s, 232 MB/s

Upload the file into the pool as an object named object-1
[root@ceph-node1 mnt]# rados put object-1 file -p test

List the objects in the pool
[root@ceph-node1 mnt]# rados ls -p test
object-1
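You can also inspect the stored object with rados stat, which reports its size and modification time. A sketch (mtime elided; the exact output format may differ slightly by release):

[root@ceph-node1 mnt]# rados stat object-1 -p test
test/object-1 mtime ..., size 10485760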

Now create 10 objects in total.

Create the rest in a loop
[root@ceph-node1 mnt]# for i in {2..10}; do rados put object-$i file -p test; done

List all the objects
[root@ceph-node1 mnt]# rados ls -p test
object-4
object-10
object-3
object-5
object-7
object-1
object-2
object-8
object-6
object-9
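A quick count confirms the pool is now exactly at its max_objects quota of 10:

[root@ceph-node1 mnt]# rados ls -p test | wc -l
10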

Everything was created successfully. Wait a moment, then check the Ceph status.

Check the Ceph status and storage usage:

[root@ceph-node1 mnt]# ceph -s
  cluster:
    id:     130b5ac0-938a-4fd2-ba6f-3d37e1a4e908
    health: HEALTH_WARN
            1 pool(s) full
....
[root@ceph-node1 mnt]# ceph df |grep POOLS -A 2
POOLS:
    POOL     ID     PGS     STORED      OBJECTS     USED        %USED     MAX AVAIL
    test      9       8     100 MiB          10     300 MiB      1.12       8.6 GiB

The status now carries a warning, 1 pool(s) full, meaning one pool has reached its quota.

Note:
STORED is the real size of the data written by clients.
USED is the total raw space consumed. This pool uses the default three-way replication, so every object is stored three times: 100 MB x 3 = 300 MB.
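You can confirm the replication factor behind that 3x multiplier directly:

[root@ceph-node1 mnt]# ceph osd pool get test size
size: 3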

Now let's try adding a new object, and deleting one.

[root@ceph-node1 mnt]# rados put object-11 file -p test
2021-01-20 17:05:28.388 7ff1b55399c0  0 client.170820.objecter  FULL, paused modify 0x55f2d92ae380 tid 0

Even deletes are blocked, because the client pauses all modify operations once the pool is full
[root@ceph-node1 mnt]# rados rm object-10 -p test
2021-01-20 17:05:40.149 7f43dac589c0  0 client.170835.objecter  FULL, paused modify 0x5624ef387bb0 tid 0
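If you need to free space without removing the limit entirely, one workaround is to temporarily raise the quota above the current object count, delete, and then re-apply the limit. A sketch of the pattern, using only the commands shown above (not run as part of this test session):

Raise the quota so modify operations resume
[root@ceph-node1 mnt]# ceph osd pool set-quota test max_objects 20
set-quota max_objects = 20 for pool test

Deletes now go through
[root@ceph-node1 mnt]# rados rm <object-name> -p test

Re-apply the original limit
[root@ceph-node1 mnt]# ceph osd pool set-quota test max_objects 10
set-quota max_objects = 10 for pool test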

To recover fully, it's simple: just set max_objects back to 0.

0 is the default value, meaning no limit.

[root@ceph-node1 mnt]# ceph osd pool set-quota test max_objects 0
set-quota max_objects = 0 for pool test
[root@ceph-node1 mnt]#
[root@ceph-node1 mnt]# ceph osd pool get-quota test
quotas for pool 'test':
  max objects: N/A
  max bytes  : N/A
[root@ceph-node1 mnt]# ceph -s
  cluster:
    id:     130b5ac0-938a-4fd2-ba6f-3d37e1a4e908
    health: HEALTH_OK

2. Limiting the amount of data stored in a pool

This test builds on the setup above.

Delete the objects used in the test above:

[root@ceph-node1 mnt]# for i in {1..10}; do rados rm object-$i -p test; done;
[root@ceph-node1 mnt]#
[root@ceph-node1 mnt]# rados ls -p test
[root@ceph-node1 mnt]#

Set max_bytes to limit the amount of stored data:

[root@ceph-node1 mnt]# ceph osd pool set-quota test max_bytes 100M
set-quota max_bytes = 104857600 for pool test
[root@ceph-node1 mnt]#
[root@ceph-node1 mnt]# ceph osd pool get-quota test
quotas for pool 'test':
  max objects: N/A
  max bytes  : 100 MiB
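As the confirmation line shows, the M suffix is expanded into bytes (104857600), so passing the raw byte count is equivalent:

Same quota, specified in plain bytes instead of with a size suffix
[root@ceph-node1 mnt]# ceph osd pool set-quota test max_bytes 104857600
set-quota max_bytes = 104857600 for pool test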

Create a 100 MB test file and upload it into the pool:

[root@ceph-node1 mnt]# dd if=/dev/zero of=/mnt/file_100 bs=100M count=1
1+0 records in
1+0 records out
104857600 bytes (105 MB) copied, 1.00625 s, 104 MB/s
[root@ceph-node1 mnt]#
[root@ceph-node1 mnt]# ll -h file_100
-rw-r--r--. 1 root root 100M Jan 20 17:50 file_100
[root@ceph-node1 mnt]#
[root@ceph-node1 mnt]# rados put object-1 file_100 -p test
[root@ceph-node1 mnt]#

After the command completes, wait a moment and check the cluster status:

[root@ceph-node1 mnt]# ceph -s
  cluster:
    id:     130b5ac0-938a-4fd2-ba6f-3d37e1a4e908
    health: HEALTH_WARN
            1 pool(s) full
[root@ceph-node1 mnt]# ceph df
POOLS:
    POOL     ID     PGS     STORED      OBJECTS     USED        %USED     MAX AVAIL
    test      9       8     100 MiB           1     300 MiB      1.12       8.7 GiB

At this point, any further incoming writes are blocked:

[root@ceph-node1 mnt]# rados put object-2 file_100 -p test
2021-01-20 17:54:12.740 7fa9704ce9c0  0 client.173479.objecter  FULL, paused modify 0x55e57f97d380 tid 0
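Note that the client does not actually return an error here: as the log line says, the request is paused, and the process hangs until the quota is raised. In scripts it can therefore be worth wrapping such writes in a timeout. A minimal sketch using GNU coreutils timeout, which exits with status 124 when it kills the command:

Abort the upload if it has not completed within 30 seconds
[root@ceph-node1 mnt]# timeout 30 rados put object-2 file_100 -p test
[root@ceph-node1 mnt]# echo $?
124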

To recover, just as with max_objects, set the value back to 0:

[root@ceph-node1 mnt]# ceph osd pool set-quota test max_bytes 0
set-quota max_bytes = 0 for pool test
[root@ceph-node1 mnt]# ceph osd pool get-quota test
quotas for pool 'test':
  max objects: N/A
  max bytes  : N/A

If you want finer-grained control, you can also set both parameters at the same time, as sketched below.
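Whichever limit is reached first takes effect. For example, capping the test pool at both 100 objects and 100 MiB:

[root@ceph-node1 mnt]# ceph osd pool set-quota test max_objects 100
set-quota max_objects = 100 for pool test
[root@ceph-node1 mnt]# ceph osd pool set-quota test max_bytes 100M
set-quota max_bytes = 104857600 for pool test
[root@ceph-node1 mnt]# ceph osd pool get-quota test
quotas for pool 'test':
  max objects: 100 objects
  max bytes  : 100 MiB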

End...

Reprinted from blog.csdn.net/weixin_43860781/article/details/112907361