Tuesday, June 07, 2016

Moving MySQL Data Folder on cPanel Environment

According to advice from a cPanel engineer, the best way to move the MySQL data folder to a different location (e.g. a partition with more available disk space) on a cPanel / CentOS environment is to create a symbolic link rather than modify the my.cnf file.
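
For comparison, the my.cnf route would mean pointing the server at the new location explicitly, roughly as sketched below (illustrative only; cPanel manages parts of my.cnf, which is one reason the symlink approach is preferred):

# /etc/my.cnf -- the alternative NOT used in this guide
[mysqld]
datadir=/home/var_mysql/mysql
socket=/home/var_mysql/mysql/mysql.sock   # clients would also need this new socket path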

Presuming that the original MySQL data folder is located at /var/lib/mysql and the partition with more available disk space is mounted at /home, these are the steps to move the MySQL data folder from /var/lib/mysql to /home/var_mysql/mysql.
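
Before starting, it doesn't hurt to confirm that /home really is the larger partition and has enough free space for the data folder:

df -h /home            # confirm the mount point and the available space
du -sh /var/lib/mysql  # size of the data folder that will be moved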

1. Back up all the MySQL databases, just in case.

mkdir -p /home/backup (the -p flag makes this safe even if the folder already exists)
mysqldump --all-databases | gzip > /home/backup/alldatabases.sql.gz
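
Should the backup ever be needed, restoring it is roughly the reverse (assuming root's MySQL credentials are picked up automatically, as on most cPanel servers):

# restore everything from the compressed dump -- only if something goes wrong
gunzip < /home/backup/alldatabases.sql.gz | mysql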

2. Stop the MySQL service and verify that it has stopped.

/etc/init.d/mysql stop
/etc/init.d/mysql status
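
If the status output is not conclusive, double-check that no MySQL process is still running:

pgrep -l mysqld    # should print nothing once mysqld and mysqld_safe have stopped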

3. Create the destination folder, move the existing data folder (with all its files and subfolders) to the new location, fix the ownership, and create the symbolic link.

mkdir /home/var_mysql
mv /var/lib/mysql /home/var_mysql
chown -R mysql:mysql /home/var_mysql/mysql
ln -s /home/var_mysql/mysql /var/lib/mysql
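
Before starting MySQL again, it's worth a quick check that the symlink and the ownership look right:

ls -ld /var/lib/mysql          # should show: /var/lib/mysql -> /home/var_mysql/mysql
ls -ld /home/var_mysql/mysql   # should be owned by mysql:mysql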


4. Start the MySQL service again, and verify that it has started.

/etc/init.d/mysql start
/etc/init.d/mysql status
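
As a final sanity check, confirm that MySQL is serving the databases from the new location:

mysql -e "SHOW DATABASES;"     # all the databases should still be listed
mysql -e "SELECT @@datadir;"   # still reports /var/lib/mysql, now resolved through the symlink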

That's all. :)

Wednesday, June 01, 2016

Ceph - Crush Map has Legacy Tunables

I upgraded Ceph from the old Dumpling version to the latest Jewel version. In addition to the OSDs not being able to start up due to permission settings on /var/lib/ceph (the ownership needs to be changed recursively to ceph:ceph, as sketched after the status output below), I am also getting these HEALTH_WARN messages:

indra@sc-test-nfs-01:~$ ceph status
    cluster d3dc01a3-c38d-4a85-b040-3015455246e6
     health HEALTH_WARN
            too many PGs per OSD (512 > max 300)
            crush map has legacy tunables (require bobtail, min is firefly)
            crush map has straw_calc_version=0

     monmap e3: 3 mons at {sc-test-ceph-01=192.168.3.3:6789/0,sc-test-ceph-02=192.168.3.4:6789/0,sc-test-nfs-01=192.168.3.2:6789/0}
            election epoch 50, quorum 0,1,2 sc-test-nfs-01,sc-test-ceph-01,sc-test-ceph-02
     osdmap e100: 3 osds: 3 up, 3 in
      pgmap v965721: 704 pgs, 6 pools, 188 MB data, 59 objects
            61475 MB used, 1221 GB / 1350 GB avail
                 704 active+clean
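
As a side note, the OSD start-up issue was fixed with a recursive ownership change before dealing with these warnings, roughly as below (run on each OSD node while the Ceph daemons are stopped; double-check the path on your own nodes):

chown -R ceph:ceph /var/lib/ceph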

Resolving the crush map warnings is very simple; use the command below:

ceph osd crush tunables optimal

indra@sc-test-nfs-01:~$ ceph osd crush tunables optimal
adjusted tunables profile to optimal

Ceph status after the adjustment:

indra@sc-test-nfs-01:~$ ceph status
    cluster d3dc01a3-c38d-4a85-b040-3015455246e6
     health HEALTH_WARN
            too many PGs per OSD (512 > max 300)
     monmap e3: 3 mons at {sc-test-ceph-01=192.168.3.3:6789/0,sc-test-ceph-02=192.168.3.4:6789/0,sc-test-nfs-01=192.168.3.2:6789/0}
            election epoch 50, quorum 0,1,2 sc-test-nfs-01,sc-test-ceph-01,sc-test-ceph-02
     osdmap e101: 3 osds: 3 up, 3 in
      pgmap v965764: 704 pgs, 6 pools, 188 MB data, 59 objects
            61481 MB used, 1221 GB / 1350 GB avail
                 704 active+clean

The warning messages related to the crush map are gone. Yay!
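
The active tunables profile can also be inspected directly with the command below (the exact fields in the output vary between releases). Keep in mind that switching tunables profiles can trigger data rebalancing, so on a cluster holding real data it is best done during a quiet period.

ceph osd crush show-tunables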

PS. Ignore the "too many PGs per OSD" warning; it shows up because I have a limited number of OSDs and too many pools and PGs on my test environment.
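
If the warning is still annoying on a small test cluster like this, the threshold can be raised; a rough sketch using the Jewel-era mon_pg_warn_max_per_osd option (its default of 300 is the "max 300" in the warning), set in ceph.conf under [mon] or injected at runtime:

ceph tell mon.* injectargs '--mon_pg_warn_max_per_osd 1024'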

Source: https://www.mail-archive.com/ceph-users@lists.ceph.com/msg10225.html
Reference: http://docs.ceph.com/docs/master/rados/operations/crush-map/