indra@sc-test-nfs-01:~$ ceph status
    cluster d3dc01a3-c38d-4a85-b040-3015455246e6
     health HEALTH_WARN
            too many PGs per OSD (512 > max 300)
            crush map has legacy tunables (require bobtail, min is firefly)
            crush map has straw_calc_version=0
     monmap e3: 3 mons at {sc-test-ceph-01=192.168.3.3:6789/0,sc-test-ceph-02=192.168.3.4:6789/0,sc-test-nfs-01=192.168.3.2:6789/0}
            election epoch 50, quorum 0,1,2 sc-test-nfs-01,sc-test-ceph-01,sc-test-ceph-02
     osdmap e100: 3 osds: 3 up, 3 in
      pgmap v965721: 704 pgs, 6 pools, 188 MB data, 59 objects
            61475 MB used, 1221 GB / 1350 GB avail
                 704 active+clean
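Before making the change, you can check which tunables are currently in effect. The command below dumps the active CRUSH tunables (fields such as straw_calc_version and the current profile; the exact set of fields varies by release):

ceph osd crush show-tunables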
Fixing the CRUSH warnings is very simple: just run the command below.
ceph osd crush tunables optimal
indra@sc-test-nfs-01:~$ ceph osd crush tunables optimal
adjusted tunables profile to optimal
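Note that switching the tunables profile changes how CRUSH places data, so on a cluster holding real data this can trigger some rebalancing. If you want to watch the cluster settle, something like the command below works; on this test cluster, with only 188 MB of data, the movement is negligible.

ceph -w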
Ceph status after the adjustment:
indra@sc-test-nfs-01:~$ ceph status
    cluster d3dc01a3-c38d-4a85-b040-3015455246e6
     health HEALTH_WARN
            too many PGs per OSD (512 > max 300)
     monmap e3: 3 mons at {sc-test-ceph-01=192.168.3.3:6789/0,sc-test-ceph-02=192.168.3.4:6789/0,sc-test-nfs-01=192.168.3.2:6789/0}
            election epoch 50, quorum 0,1,2 sc-test-nfs-01,sc-test-ceph-01,sc-test-ceph-02
     osdmap e101: 3 osds: 3 up, 3 in
      pgmap v965764: 704 pgs, 6 pools, 188 MB data, 59 objects
            61481 MB used, 1221 GB / 1350 GB avail
                 704 active+clean
The warning messages related to the crush map are gone. Yay!
PS. You can ignore the "too many PGs per OSD" warning here; it shows up because my test environment has only a few OSDs but too many pools and PGs.
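If you still want to silence that warning on a small lab cluster, one option is to raise the monitor's warning threshold, which defaults to 300. Treat the line below as a sketch for hammer/jewel-era releases (the option was changed in later versions, so check the docs for yours):

ceph tell mon.* injectargs '--mon_pg_warn_max_per_osd 600'

Adding mon_pg_warn_max_per_osd = 600 to the [mon] section of ceph.conf makes the change persistent across monitor restarts.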
Source: https://www.mail-archive.com/ceph-users@lists.ceph.com/msg10225.html
Reference: http://docs.ceph.com/docs/master/rados/operations/crush-map/