CentOS 6.5 Ceph Deployment Test


1     Overview

Ceph is a distributed file system that adds replication and fault tolerance while remaining POSIX compatible. Its most distinctive feature is the distributed metadata server, which uses the CRUSH (Controlled Replication Under Scalable Hashing) algorithm to determine where files are placed. At the core of Ceph is RADOS (Reliable Autonomic Distributed Object Store), an object cluster store that itself provides high availability, error detection, and recovery for objects.

The Ceph ecosystem can be divided into four parts:

•client: the client (data user). The client exports a POSIX file system interface for applications and talks to mon/mds/osd to exchange metadata and data. The earliest client was implemented with FUSE; it has since been moved into the kernel, so a ceph.ko kernel module must be built to use it.

•mon: the cluster monitor, whose daemon is cmon (Ceph Monitor). mon monitors and manages the whole cluster and exports a network file system to clients, which can mount it with mount -t ceph monitor_ip:/ mount_point or ceph-fuse -m monitor_ip:6789 mount_point. According to the official documentation, three mons are enough to guarantee cluster reliability.

•mds: the metadata server, whose daemon is cmds (Ceph Metadata Server). Ceph can run multiple MDS daemons as a distributed metadata server cluster, which relies on Ceph's dynamic directory partitioning for load balancing.

•osd: the object storage cluster, whose daemon is cosd (Ceph Object Storage Device). osd wraps the local file system and exposes an object storage interface, storing both data and metadata as objects. The local file system can be ext2/3, but Ceph considers those file systems poorly suited to the osd's access pattern; the project originally implemented its own ebofs and has since switched to btrfs.

Ceph scales to hundreds or thousands of nodes and beyond, and the four components above are best placed on separate nodes. For basic testing, however, mon and mds can share one node, or all four components can be deployed on a single node.

2     Environment Preparation

2.1    Version Information

The environment used in this document is as follows:


OS version: CentOS release 6.5 (Final)

Linux c01 2.6.32-431.el6.x86_64

Ceph package versions:

ceph-0.80.5-0.el6.x86_64

ceph-deploy-1.5.10-0.noarch

ceph-0.81.0-5.el6.x86_64

ceph-fuse-0.80.5-0.el6.x86_64

Two servers:

c01 192.168.11.111

c02 192.168.12.112

Node layout:

MON node: 192.168.11.111 c01

MDS node: 192.168.11.111 c01

OSD0 node: 192.168.12.112 c02

OSD1 node: 192.168.12.112 c02

Client: any other server

2.2    Pre-deployment Configuration

Ceph nodes communicate with each other by hostname, so every node needs the same /etc/hosts.

Every node except the client needs mutual SSH key authentication.
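A minimal sketch of these two prerequisites, run on c01, using the hostnames and IPs from section 2.1 (adjust to your own environment):

cat >> /etc/hosts <<EOF
192.168.11.111 c01
192.168.12.112 c02
EOF
scp /etc/hosts c02:/etc/hosts              # every node needs the same hosts file

ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa   # generate a key pair on c01
ssh-copy-id root@c02                       # push the public key to the other node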

Disable SELinux; stop iptables (or allow port 6789), configure mutual SSH authentication, and configure /etc/hosts:

/etc/init.d/iptables stop && chkconfig iptables off

sed -i '/^SELINUX=/cSELINUX=disabled' /etc/selinux/config

setenforce 0

3     Deploying the Ceph Cluster

3.1   Installing ceph-deploy

ceph-deploy is the official Ceph deployment tool. It requires a consistent /etc/hosts and mutual SSH key authentication across the nodes. In this deployment, ceph-deploy is installed on the c01 node.

#rpm -ivh http://mirrors.ustc.edu.cn/fedora/epel/6/x86_64/epel-release-6-8.noarch.rpm

rpm -ivh http://ceph.com/rpm-firefly/el6/noarch/ceph-release-1-0.el6.noarch.rpm

yum install ceph-deploy -y

Before deploying, make sure no node still holds Ceph data or packages (clear out any previous Ceph data first; this step is unnecessary for a fresh install, but run the commands below when redeploying). Here c01 and c02 are hostnames.

[root@c01 ceph]# ceph-deploy purgedata c01 c02

[root@c01 ceph]# ceph-deploy forgetkeys

[root@c01 ceph]# ceph-deploy purge c01 c02

Note: on a brand-new environment with no data, this step is not needed.

3.2   Installing Ceph on All Nodes

From the c01 node, use ceph-deploy to install Ceph on each node:

[root@c01 ceph]# ceph-deploy install c01 c02

(If the installation is interrupted by network problems, you can install on an individual server with ceph-deploy install <hostname>.)

Note: Ceph can also be installed completely by hand, as follows:

yum clean all

yum -y install wget

wget http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

rpm -Uvh --replacepkgs epel-release-6*.rpm

yum -y install yum-plugin-priorities

rpm --import 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc'

rpm -Uvh --replacepkgs http://ceph.com/rpm-firefly/el6/noarch/ceph-release-1-0.el6.noarch.rpm

yum -y install ceph

ceph --version

[c01][DEBUG] ceph version 0.80.5 (38b73c67d375a2552d8ed67843c8a65c2c0feba6)

If the following error occurs:

[host-192-168-44-100][WARNIN]   file /etc/yum.repos.d/ceph.repo from install of ceph-release-1-0.el6.noarch conflicts with file from package ceph-release-1-0.el6.noarch

[host-192-168-44-100][ERROR ] RuntimeError: command returned non-zero exit status: 1

[ceph_deploy][ERROR ] RuntimeError: Failed to execute command: rpm -Uvh --replacepkgs http://ceph.com/rpm-firefly/el6/noarch/ceph-release-1-0.el6.noarch.rpm

remove the ceph-release package that was just installed and re-run the command:

rpm -e ceph-release

The full installation log is shown in the appendix.

3.3       Configuring the mon Node

3.3.1    Creating the mon node

Run ceph-deploy new c01 (after this command c01 becomes the monitor node; for multiple mon nodes, append more hostnames for redundancy, e.g. ceph-deploy new c01 c02 c03):

[root@c01 ceph]# cd /etc/ceph/

[root@c01 ceph]# ceph-deploy new c01

[ceph_deploy.conf][DEBUG ] found configurationfile at: /root/.cephdeploy.conf

[ceph_deploy.cli][INFO  ] Invoked (1.5.10): /usr/bin/ceph-deploy newc01

[ceph_deploy.new][DEBUG ] Creating new clusternamed ceph

[ceph_deploy.new][DEBUG ] Resolving host c01

[ceph_deploy.new][DEBUG ] Monitor c01 at192.168.11.111

[ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds

[ceph_deploy.new][DEBUG ] Monitor initialmembers are ['c01']

[ceph_deploy.new][DEBUG ] Monitor addrs are['192.168.11.111']

[ceph_deploy.new][DEBUG ] Creating a random mon key...

[ceph_deploy.new][DEBUG ] Writing initialconfig to ceph.conf...

[ceph_deploy.new][DEBUG ] Writing monitorkeyring to ceph.mon.keyring...

View the generated configuration:

[root@c01 ceph]# ls

ceph.conf  ceph.log  ceph.mon.keyring

[root@c01 ceph]# cat ceph.conf

[global]

auth_service_required = cephx

filestore_xattr_use_omap = true

auth_client_required = cephx

auth_cluster_required = cephx

mon_host = 192.168.11.111

mon_initial_members = c01

fsid = 209c6414-a659-487e-b84e-22e1f6f29cd1

 

[root@c01 ceph]#

If the following error appears:

[c01][WARNIN] Traceback (most recent call last):

[c01][WARNIN]  File "/usr/bin/ceph", line 53, in <module>

[c01][WARNIN]     import argparse

[c01][WARNIN] ImportError: No module named argparse

[node1][ERROR ] RuntimeError: command returned non-zero exit status: 1

[ceph_deploy][ERROR ] RuntimeError: Failed to execute command: ceph --version

The fix is to install the python argparse module on the node that reported the error:

[root@c01 ceph]# yum install *argparse* -y

3.3.2    Changing the default osd pool size

Edit the Ceph configuration file on the mon node and add the following settings to ceph.conf.

If there is only one osd, set both options below to 1; otherwise osd pool default size = 2 is sufficient.

[root@c01 ceph]# vim /etc/ceph/ceph.conf

osd pool default size = 1

osd pool default min size = 1

3.3.3    Changing authentication

To make client mounts easier, use none so that cephx authentication is not required.

[root@c01 ceph]# vim /etc/ceph/ceph.conf

auth_service_required = none

auth_client_required = none

auth_cluster_required = none

auth supported = none

3.3.4    Initializing the mon node

Add the initial monitor node and gather the keys (required with ceph-deploy v1.1.3 and later).

[root@c01 ceph]# ceph-deploy --overwrite-conf mon create-initial

[ceph_deploy.conf][DEBUG ] found configurationfile at: /root/.cephdeploy.conf

[ceph_deploy.cli][INFO  ] Invoked (1.5.10): /usr/bin/ceph-deploy--overwrite-conf mon create-initial

[ceph_deploy.mon][DEBUG ] Deploying mon,cluster ceph hosts c01

[ceph_deploy.mon][DEBUG ] detecting platformfor host c01 ...

[c01][DEBUG ] connected to host: c01

[c01][DEBUG ] detect platform information fromremote host

[c01][DEBUG ] detect machine type

[ceph_deploy.mon][INFO  ] distro info: CentOS 6.5 Final

[c01][DEBUG ] determining if provided host hassame hostname in remote

[c01][DEBUG ] get remote short hostname

[c01][DEBUG ] deploying mon to c01

[c01][DEBUG ] get remote short hostname

[c01][DEBUG ] remote hostname: c01

[c01][DEBUG ] write cluster configuration to/etc/ceph/{cluster}.conf

[c01][DEBUG ] create the mon path if it doesnot exist

[c01][DEBUG ] checking for done path:/var/lib/ceph/mon/ceph-c01/done

[c01][DEBUG ] done path does not exist:/var/lib/ceph/mon/ceph-c01/done

[c01][INFO ] creating keyring file: /var/lib/ceph/tmp/ceph-c01.mon.keyring

[c01][DEBUG ] create the monitor keyring file

[c01][INFO ] Running command: ceph-mon --cluster ceph --mkfs -i c01 --keyring/var/lib/ceph/tmp/ceph-c01.mon.keyring

[c01][DEBUG ] ceph-mon: mon.noname-a192.168.11.111:6789/0 is local, renaming to mon.c01

[c01][DEBUG ] ceph-mon: set fsid to28dc2c77-7e70-4fa5-965b-1d78ed72d18a

[c01][DEBUG ] ceph-mon: created monfs at/var/lib/ceph/mon/ceph-c01 for mon.c01

[c01][INFO ] unlinking keyring file /var/lib/ceph/tmp/ceph-c01.mon.keyring

[c01][DEBUG ] create a done file to avoidre-doing the mon deployment

[c01][DEBUG ] create the init path if it doesnot exist

[c01][DEBUG ] locating the `service`executable...

[c01][INFO ] Running command: /sbin/service ceph -c /etc/ceph/ceph.conf startmon.c01

[c01][DEBUG ] === mon.c01 ===

[c01][DEBUG ] Starting Ceph mon.c01 on c01...

[c01][DEBUG ] Starting ceph-create-keys onc01...

[c01][INFO ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.c01.asokmon_status

[c01][DEBUG ]********************************************************************************

[c01][DEBUG ] status for monitor: mon.c01

[c01][DEBUG ] {

[c01][DEBUG ]  "election_epoch": 2,

[c01][DEBUG ]  "extra_probe_peers": [],

[c01][DEBUG ]  "monmap": {

[c01][DEBUG ]     "created": "0.000000",

[c01][DEBUG ]     "epoch": 1,

[c01][DEBUG ]     "fsid":"28dc2c77-7e70-4fa5-965b-1d78ed72d18a",

[c01][DEBUG ]     "modified":"0.000000",

[c01][DEBUG ]     "mons": [

[c01][DEBUG ]       {

[c01][DEBUG ]         "addr":"192.168.11.111:6789/0",

[c01][DEBUG ]         "name": "c01",

[c01][DEBUG ]         "rank": 0

[c01][DEBUG ]       }

[c01][DEBUG ]     ]

[c01][DEBUG ]  },

[c01][DEBUG ]  "name": "c01",

[c01][DEBUG ]   "outside_quorum": [],

[c01][DEBUG ]  "quorum": [

[c01][DEBUG ]     0

[c01][DEBUG ]  ],

[c01][DEBUG ]  "rank": 0,

[c01][DEBUG ]  "state": "leader",

[c01][DEBUG ]  "sync_provider": []

[c01][DEBUG ] }

[c01][DEBUG ] ********************************************************************************

[c01][INFO ] monitor: mon.c01 is running

[c01][INFO ] Running command: ceph --cluster=ceph --admin-daemon/var/run/ceph/ceph-mon.c01.asok mon_status

[ceph_deploy.mon][INFO  ] processing monitor mon.c01

[c01][DEBUG ] connected to host: c01

[c01][INFO ] Running command: ceph --cluster=ceph --admin-daemon/var/run/ceph/ceph-mon.c01.asok mon_status

[ceph_deploy.mon][INFO  ] mon.c01 monitor has reached quorum!

[ceph_deploy.mon][INFO  ] all initial monitors are running and haveformed quorum

[ceph_deploy.mon][INFO  ] Running gatherkeys...

[ceph_deploy.gatherkeys][DEBUG ] Haveceph.client.admin.keyring

[ceph_deploy.gatherkeys][DEBUG ] Haveceph.mon.keyring

[ceph_deploy.gatherkeys][DEBUG ] Checking c01for /var/lib/ceph/bootstrap-osd/ceph.keyring

[c01][DEBUG ] connected to host: c01

[c01][DEBUG ] detect platform information fromremote host

[c01][DEBUG ] detect machine type

[c01][DEBUG ] fetch remote file

[ceph_deploy.gatherkeys][DEBUG ] Gotceph.bootstrap-osd.keyring key from c01.

[ceph_deploy.gatherkeys][DEBUG ] Checking c01for /var/lib/ceph/bootstrap-mds/ceph.keyring

[c01][DEBUG ] connected to host: c01

[c01][DEBUG ] detect platform information fromremote host

[c01][DEBUG ] detect machine type

[c01][DEBUG ] fetch remote file

[ceph_deploy.gatherkeys][DEBUG ] Gotceph.bootstrap-mds.keyring key from c01.

Three additional keyring files now appear under /etc/ceph:

[root@c01 ceph]# ll -lh

total 172K

-rw-r--r--. 1 root root   71 Aug 13 14:27 ceph.bootstrap-mds.keyring

-rw-r--r--. 1 root root   71 Aug 13 14:27 ceph.bootstrap-osd.keyring

-rw-------. 1 root root   63 Aug 13 14:26 ceph.client.admin.keyring

3.4   Configuring the osd Node

3.4.1    Preparing the OSD node's disks

Add the c02 node as an osd. Log in to c02 and check the unallocated partitions:

[root@c01 ceph]# ssh c02

Last login: Wed Aug 13 09:43:43 2014 from192.168.0.108

[root@c02 ~]# fdisk -l

 

Disk /dev/vda: 42.9 GB, 42949672960 bytes

16 heads, 63 sectors/track, 83220 cylinders

Units = cylinders of 1008 * 512 = 516096 bytes

Sector size (logical/physical): 512 bytes / 512bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0x00024e72

 

   DeviceBoot      Start         End      Blocks  Id  System

/dev/vda1               3        4066    2048000   82  Linux swap / Solaris

Partition 1 does not end on cylinder boundary.

/dev/vda2  *        4066       31208   13679616   83  Linux

Partition 2 does not end on cylinder boundary.

 

Disk /dev/vdb: 107.4 GB, 107374182400 bytes

16 heads, 63 sectors/track, 208050 cylinders

Units = cylinders of 1008 * 512 = 516096 bytes

Sector size (logical/physical): 512 bytes / 512bytes

I/O size (minimum/optimal): 512 bytes / 512bytes

Disk identifier: 0x0b9699d9

 

   DeviceBoot      Start         End      Blocks  Id  System

/dev/vdb1               1      104025   52428568+  83  Linux

/dev/vdb2          104026      208050   52428600   83  Linux

 

Disk /dev/vdc: 161.1 GB, 161061273600 bytes

16 heads, 63 sectors/track, 312076 cylinders

Units = cylinders of 1008 * 512 = 516096 bytes

Sector size (logical/physical): 512 bytes / 512bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0xc4dfc926

 

   DeviceBoot      Start         End      Blocks  Id  System

/dev/vdc1               1      104025   52428568+  83  Linux

/dev/vdc2          104026      208050   52428600   83  Linux

/dev/vdc3          208051      312076   52429104   83  Linux

[root@c02 ~]#

[root@c02 ~]# df -h

Filesystem     Size  Used Avail Use% Mounted on

/dev/vda2       13G  2.2G   11G 18% /

tmpfs          939M     0  939M  0% /dev/shm

[root@c02 ~]#

The output shows that the second disk is unused. Its vdb1 and vdb2 partitions will serve as osd disks; osd0 is added here first, and osd1 will be added manually after the cluster deployment is complete.

3.4.2    Preparing the partition file system

On the c01 node, prepare the first device of osd node c02. The default file system is xfs; the --fs-type option can specify ext4 instead. Here c02 is the node hostname and /dev/vdb1 is the partition prepared for osd0.

[root@c01 ceph]# ceph-deploy osd prepare c02:/dev/vdb1 --fs-type ext4

[ceph_deploy.conf][DEBUG ] found configurationfile at: /root/.cephdeploy.conf

[ceph_deploy.cli][INFO  ] Invoked (1.5.10): /usr/bin/ceph-deploy--overwrite-conf osd prepare c02:/dev/vdb1

[ceph_deploy.osd][DEBUG ] Preparing clusterceph disks c02:/dev/vdb1:

[c02][DEBUG ] connected to host: c02

[c02][DEBUG ] detect platform information fromremote host

[c02][DEBUG ] detect machine type

[ceph_deploy.osd][INFO  ] Distro info: CentOS 6.5 Final

[ceph_deploy.osd][DEBUG ] Deploying osd to c02

[c02][DEBUG ] write cluster configuration to/etc/ceph/{cluster}.conf

[c02][WARNIN] osd keyring does not exist yet,creating one

[c02][DEBUG ] create a keyring file

[c02][INFO ] Running command: udevadm trigger --subsystem-match=block --action=add

[ceph_deploy.osd][DEBUG ] Preparing host c02disk /dev/vdb1 journal None activate False

[c02][INFO ] Running command: ceph-disk -v prepare --fs-type xfs --cluster ceph --/dev/vdb1

[c02][DEBUG ] meta-data=/dev/vdb1              isize=2048   agcount=4, agsize=3276786 blks

[c02][DEBUG ]          =                       sectsz=512  attr=2, projid32bit=0

[c02][DEBUG ] data     =                       bsize=4096   blocks=13107142, imaxpct=25

[c02][DEBUG ]          =                       sunit=0      swidth=0 blks

[c02][DEBUG ] naming   =version 2              bsize=4096   ascii-ci=0

[c02][DEBUG ] log      =internal log           bsize=4096   blocks=6399, version=2

[c02][DEBUG ]          =                       sectsz=512   sunit=0 blks, lazy-count=1

[c02][DEBUG ] realtime =none                   extsz=4096   blocks=0, rtextents=0

[c02][WARNIN] INFO:ceph-disk:Running command:/usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid

[c02][WARNIN] INFO:ceph-disk:Running command:/usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs

[c02][WARNIN] INFO:ceph-disk:Running command:/usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs

[c02][WARNIN] INFO:ceph-disk:Running command:/usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs

[c02][WARNIN] INFO:ceph-disk:Running command:/usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs

[c02][WARNIN] INFO:ceph-disk:Running command:/usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size

[c02][WARNIN] DEBUG:ceph-disk:OSD data device/dev/vdb1 is a partition

[c02][WARNIN] DEBUG:ceph-disk:Creating xfs fson /dev/vdb1

[c02][WARNIN] INFO:ceph-disk:Running command:/sbin/mkfs -t xfs -f -i size=2048 -- /dev/vdb1

[c02][WARNIN] DEBUG:ceph-disk:Mounting/dev/vdb1 on /var/lib/ceph/tmp/mnt.N7P_5Q with options noatime

[c02][WARNIN] INFO:ceph-disk:Running command:/bin/mount -t xfs -o noatime -- /dev/vdb1 /var/lib/ceph/tmp/mnt.N7P_5Q

[c02][WARNIN] DEBUG:ceph-disk:Preparing osddata dir /var/lib/ceph/tmp/mnt.N7P_5Q

[c02][WARNIN] DEBUG:ceph-disk:Unmounting/var/lib/ceph/tmp/mnt.N7P_5Q

[c02][WARNIN] INFO:ceph-disk:Running command:/bin/umount -- /var/lib/ceph/tmp/mnt.N7P_5Q

[c02][WARNIN] INFO:ceph-disk:calling partx onprepared device /dev/vdb1

[c02][WARNIN] INFO:ceph-disk:re-reading knownpartitions will display errors

[c02][WARNIN] INFO:ceph-disk:Running command:/sbin/partx -a /dev/vdb1

[c02][WARNIN] last arg is not the whole disk

[c02][WARNIN] call: partx -opts devicewholedisk

[c02][INFO ] checking OSD status...

[c02][INFO ] Running command: ceph --cluster=ceph osd stat --format=json

[ceph_deploy.osd][DEBUG ] Host c02 is now readyfor osd use.

Unhandled exception in thread started by

Error in sys.excepthook:

 

Original exception was:

[root@c01 ceph]#

3.4.3     Activating the osd device

Activate osd.0 from the c01 node:

[root@c01 ceph]# ceph-deploy osd activate c02:/dev/vdb1

[ceph_deploy.conf][DEBUG ] found configurationfile at: /root/.cephdeploy.conf

[ceph_deploy.cli][INFO  ] Invoked (1.5.10): /usr/bin/ceph-deploy osdactivate c02:/dev/vdb1

[ceph_deploy.osd][DEBUG ] Activating clusterceph disks c02:/dev/vdb1:

[c02][DEBUG ] connected to host: c02

[c02][DEBUG ] detect platform information fromremote host

[c02][DEBUG ] detect machine type

[ceph_deploy.osd][INFO  ] Distro info: CentOS 6.5 Final

[ceph_deploy.osd][DEBUG ] activating host c02disk /dev/vdb1

[ceph_deploy.osd][DEBUG ] will use init type:sysvinit

[c02][INFO ] Running command: ceph-disk -v activate --mark-init sysvinit --mount/dev/vdb1

[c02][DEBUG ] === osd.0 ===

[c02][DEBUG ] Starting Ceph osd.0 on c02...

[c02][DEBUG ] starting osd.0 at :/0 osd_data/var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal

[c02][WARNIN] INFO:ceph-disk:Running command:/sbin/blkid -p -s TYPE -ovalue -- /dev/vdb1

[c02][WARNIN] INFO:ceph-disk:Running command:/usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs

[c02][WARNIN] INFO:ceph-disk:Running command:/usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs

[c02][WARNIN] DEBUG:ceph-disk:Mounting/dev/vdb1 on /var/lib/ceph/tmp/mnt.H6LxgI with options noatime

[c02][WARNIN] INFO:ceph-disk:Running command:/bin/mount -t xfs -o noatime -- /dev/vdb1 /var/lib/ceph/tmp/mnt.H6LxgI

[c02][WARNIN] DEBUG:ceph-disk:Cluster uuid is28dc2c77-7e70-4fa5-965b-1d78ed72d18a

[c02][WARNIN] INFO:ceph-disk:Running command:/usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid

[c02][WARNIN] DEBUG:ceph-disk:Cluster name isceph

[c02][WARNIN] DEBUG:ceph-disk:OSD uuid ise3b8a420-303d-4ce1-94da-1d09bab01fae

[c02][WARNIN] DEBUG:ceph-disk:Allocating OSD id...

[c02][WARNIN] INFO:ceph-disk:Running command:/usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring/var/lib/ceph/bootstrap-osd/ceph.keyring osd create --concisee3b8a420-303d-4ce1-94da-1d09bab01fae

[c02][WARNIN] DEBUG:ceph-disk:OSD id is 0

[c02][WARNIN] DEBUG:ceph-disk:InitializingOSD...

[c02][WARNIN] INFO:ceph-disk:Running command:/usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring/var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o/var/lib/ceph/tmp/mnt.H6LxgI/activate.monmap

[c02][WARNIN] got monmap epoch 1

[c02][WARNIN] INFO:ceph-disk:Running command:/usr/bin/ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap/var/lib/ceph/tmp/mnt.H6LxgI/activate.monmap --osd-data/var/lib/ceph/tmp/mnt.H6LxgI --osd-journal /var/lib/ceph/tmp/mnt.H6LxgI/journal--osd-uuid e3b8a420-303d-4ce1-94da-1d09bab01fae --keyring/var/lib/ceph/tmp/mnt.H6LxgI/keyring

[c02][WARNIN] 2014-08-13 14:42:27.5555317f93b82d07a0 -1 journal FileJournal::_open: disabling aio for non-blockjournal.  Use journal_force_aio to forceuse of aio anyway

[c02][WARNIN] 2014-08-13 14:42:28.6929577f93b82d07a0 -1 journal FileJournal::_open: disabling aio for non-blockjournal.  Use journal_force_aio to forceuse of aio anyway

[c02][WARNIN] 2014-08-13 14:42:28.693765 7f93b82d07a0-1 filestore(/var/lib/ceph/tmp/mnt.H6LxgI) could not find23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory

[c02][WARNIN] 2014-08-13 14:42:29.4935907f93b82d07a0 -1 created object store /var/lib/ceph/tmp/mnt.H6LxgI journal /var/lib/ceph/tmp/mnt.H6LxgI/journalfor osd.0 fsid 28dc2c77-7e70-4fa5-965b-1d78ed72d18a

[c02][WARNIN] 2014-08-13 14:42:29.4936527f93b82d07a0 -1 auth: error reading file: /var/lib/ceph/tmp/mnt.H6LxgI/keyring:can't open /var/lib/ceph/tmp/mnt.H6LxgI/keyring: (2) No such file or directory

[c02][WARNIN] 2014-08-13 14:42:29.4939127f93b82d07a0 -1 created new key in keyring /var/lib/ceph/tmp/mnt.H6LxgI/keyring

[c02][WARNIN] DEBUG:ceph-disk:Marking with initsystem sysvinit

[c02][WARNIN] DEBUG:ceph-disk:Authorizing OSDkey...

[c02][WARNIN] INFO:ceph-disk:Running command:/usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring/var/lib/ceph/bootstrap-osd/ceph.keyring auth add osd.0 -i/var/lib/ceph/tmp/mnt.H6LxgI/keyring osd allow * mon allow profile osd

[c02][WARNIN] added key for osd.0

[c02][WARNIN] DEBUG:ceph-disk:ceph osd.0 datadir is ready at /var/lib/ceph/tmp/mnt.H6LxgI

[c02][WARNIN] DEBUG:ceph-disk:Moving mount tofinal location...

[c02][WARNIN] INFO:ceph-disk:Running command:/bin/mount -o noatime -- /dev/vdb1 /var/lib/ceph/osd/ceph-0

[c02][WARNIN] INFO:ceph-disk:Running command:/bin/umount -l -- /var/lib/ceph/tmp/mnt.H6LxgI

[c02][WARNIN] DEBUG:ceph-disk:Starting cephosd.0...

[c02][WARNIN] INFO:ceph-disk:Running command:/sbin/service ceph start osd.0

[c02][WARNIN] create-or-move updating item name'osd.0' weight 0.05 at location {host=c02,root=default} to crush map

[c02][INFO ] checking OSD status...

[c02][INFO ] Running command: ceph --cluster=ceph osd stat --format=json

Unhandled exception in thread started by

Error in sys.excepthook:

 

Original exception was:

[root@c01 ceph]#

3.4.4     Editing the configuration file

[root@c02 ~]# vim /etc/ceph/ceph.conf

# add the following configuration

[osd]

osd mkfs type = ext4

osd mount options ext4 = "rw,noatime,user_xattr"

 

[osd.0]

host=c02

devs=/dev/vdb1

 

#[osd.1]

#host=c02

#devs=/dev/vdb2

 

#[osd.2]

#host=c02

#devs=/dev/vdc1

3.4.5     Syncing the configuration file and keys

Copy the Ceph configuration file and keys to the mon and osd nodes:

[root@c01 ceph]# ceph-deploy admin c01 c02

[ceph_deploy.conf][DEBUG ] found configurationfile at: /root/.cephdeploy.conf

[ceph_deploy.cli][INFO  ] Invoked (1.5.10): /usr/bin/ceph-deploy admin c01 c02

[ceph_deploy.admin][DEBUG ] Pushing admin keysand conf to c01

[c01][DEBUG ] connected to host: c01

[c01][DEBUG ] detect platform information fromremote host

[c01][DEBUG ] detect machine type

[c01][DEBUG ] get remote short hostname

[c01][DEBUG ] write cluster configuration to/etc/ceph/{cluster}.conf

[ceph_deploy.admin][DEBUG ] Pushing admin keysand conf to c02

[c02][DEBUG ] connected to host: c02

[c02][DEBUG ] detect platform information fromremote host

[c02][DEBUG ] detect machine type

[c02][DEBUG ] get remote short hostname

[c02][DEBUG ] write cluster configuration to/etc/ceph/{cluster}.conf

Make sure your ceph.client.admin.keyring is readable:

[root@admin-node ceph]# chmod +r /etc/ceph/ceph.client.admin.keyring

3.5    Checking Cluster Status

3.5.1     Checking the monitor status

[root@c01 ceph]# ceph quorum_status --format json-pretty

 

{ "election_epoch": 2,

 "quorum": [

       0],

 "quorum_names": [

       "c01"],

 "quorum_leader_name": "c01",

 "monmap": { "epoch": 1,

     "fsid": "28dc2c77-7e70-4fa5-965b-1d78ed72d18a",

     "modified": "0.000000",

     "created": "0.000000",

     "mons": [

           { "rank": 0,

             "name": "c01",

             "addr": "192.168.11.111:6789\/0"}]}}

[root@c01 ceph]#

3.5.2     Checking cluster health

ceph health

You can also check the status with ceph -s:

[root@c01 ceph]# ceph health

HEALTH_OK

[root@c01 ceph]#

[root@c01 ceph]# ceph -s

   cluster 80e9a1bb-6575-4089-89df-31c7c8ea97ac

    health HEALTH_OK

    monmap e1: 1 mons at {c01=192.168.11.111:6789/0}, election epoch 2,quorum 0 c01

    osdmap e5: 1 osds: 1 up, 1 in

     pgmap v10: 192 pgs, 3 pools, 0 bytes data, 0 objects

           5302 MB used, 42534 MB / 50396 MB avail

                192 active+clean

[root@c01 ceph]#

If HEALTH_OK is returned, the deployment succeeded.

If you see the following warning:

HEALTH_WARN 576 pgs stuck inactive; 576 pgs stuck unclean; no osds

it means no OSDs have been detected.

Or if you see:

HEALTH_WARN 178 pgs peering; 178 pgs stuck inactive; 429 pgs stuck unclean; recovery 2/24 objects degraded (8.333%)

run the following commands to resolve it:

ceph pg dump_stuck stale && ceph pg dump_stuck inactive && ceph pg dump_stuck unclean

If you see:

HEALTH_WARN 384 pgs degraded; 384 pgs stuck unclean; recovery 21/42 degraded (50.000%)

it means there are not enough osds. By default Ceph expects at least two osds; if you configure only one, set osd pool default size = 1 and osd pool default min size = 1 in /etc/ceph/ceph.conf.

If you see:

HEALTH_WARN clock skew detected on mon.node2, mon.node3

the clocks are out of sync and the nodes' time must be synchronized.

The fix:

Configure an NTP server and synchronize the time on all nodes.
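A minimal sketch of that fix on CentOS 6, assuming the public pool.ntp.org servers are reachable (substitute your own NTP server if you have one), run on every mon node:

yum install -y ntp
ntpdate pool.ntp.org                          # one-time sync before starting the daemon
/etc/init.d/ntpd start && chkconfig ntpd on   # keep the clocks in sync afterwards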

3.6    Adding the mds Node

[root@c01 ceph]# ceph-deploy mds create c01

[ceph_deploy.conf][DEBUG ] found configurationfile at: /root/.cephdeploy.conf

[ceph_deploy.cli][INFO  ] Invoked (1.5.10): /usr/bin/ceph-deploy mdscreate c01

[ceph_deploy.mds][DEBUG ] Deploying mds,cluster ceph hosts c01:c01

[c01][DEBUG ] connected to host: c01

[c01][DEBUG ] detect platform information fromremote host

[c01][DEBUG ] detect machine type

[ceph_deploy.mds][INFO  ] Distro info: CentOS 6.5 Final

[ceph_deploy.mds][DEBUG ] remote host will usesysvinit

[ceph_deploy.mds][DEBUG ] deploying mdsbootstrap to c01

[c01][DEBUG ] write cluster configuration to/etc/ceph/{cluster}.conf

[c01][DEBUG ] create path if it doesn't exist

[c01][INFO ] Running command: ceph --cluster ceph --name client.bootstrap-mds--keyring /var/lib/ceph/bootstrap-mds/ceph.keyring auth get-or-create mds.c01osd allow rwx mds allow mon allow profile mds -o/var/lib/ceph/mds/ceph-c01/keyring

[c01][INFO ] Running command: service ceph start mds.c01

[c01][DEBUG ] === mds.c01 ===

[c01][DEBUG ] Starting Ceph mds.c01 on c01...

[c01][DEBUG ] starting mds.c01 at :/0

Unhandled exception in thread started by

Error in sys.excepthook:

 

Original exception was:

[root@c01 ceph]#

3.7    Mounting on the Client

3.7.1     File system mount

Configure the repository:

rpm -ivh http://ceph.com/rpm-firefly/el6/noarch/ceph-release-1-0.el6.noarch.rpm

Install ceph-fuse:

yum install ceph-fuse -y --disablerepo=epel

Copy the keyring and configuration file to the client:

[root@c01 ceph]# scp ceph.conf ceph.client.admin.keyring <client ip>:/etc/ceph/

ceph-fuse -m 192.168.11.111:6789 /opt/fs

[root@host-10-10-12-101 ceph]# df -Th

Filesystem    Type            Size  Used Avail Use% Mounted on

/dev/vda2     ext4             13G  4.0G 8.3G  33% /

tmpfs         tmpfs           1.9G     0 1.9G   0% /dev/shm

ceph-fuse     fuse.ceph-fuse   50G  7.7G  42G  16% /opt/fs

On Ubuntu, or on any Linux system with kernel 2.6.34 or later, the file system can be mounted with either of the following commands:

mount -t ceph 192.168.11.111:6789:/ /opt/fs/

mount -t ceph 192.168.11.111:/ /opt/fs/

root@ubuntu:/opt/fs# df -Th

Filesystem            Type      Size Used Avail Use% Mounted on

/dev/vda5             ext4      8.2G 1.3G  6.6G  16% /

none                  tmpfs     4.0K    0  4.0K   0% /sys/fs/cgroup

udev                  devtmpfs  2.0G 4.0K  2.0G   1% /dev

tmpfs                 tmpfs     396M 364K  395M   1% /run

none                  tmpfs     5.0M    0  5.0M   0% /run/lock

none                  tmpfs     2.0G    0  2.0G   0% /run/shm

none                  tmpfs     100M    0  100M   0% /run/user

192.168.11.111:6789:/ ceph       50G 7.7G   42G  16% /opt/fs

root@ubuntu:/opt# df -Th

Filesystem      Type      Size  Used Avail Use% Mounted on

/dev/vda5       ext4      8.2G  1.3G 6.6G  16% /

none            tmpfs     4.0K     0 4.0K   0% /sys/fs/cgroup

udev            devtmpfs  2.0G  4.0K 2.0G   1% /dev

tmpfs           tmpfs     396M  364K 395M   1% /run

none            tmpfs     5.0M    0  5.0M   0% /run/lock

none            tmpfs     2.0G     0 2.0G   0% /run/shm

none            tmpfs     100M     0 100M   0% /run/user

192.168.11.111:/ ceph       50G 7.7G   42G  16% /opt/fs

3.7.2     Block storage (RBD) mount

The default CentOS 6.5 kernel has no rbd module, so the kernel must be upgraded before rbd can be used.

Kernels older than 2.6.34 do not include the rbd module; upgrade the kernel to the latest version:

rpm --import http://elrepo.org/RPM-GPG-KEY-elrepo.org

rpm -Uvh http://elrepo.org/elrepo-release-6-5.el6.elrepo.noarch.rpm

yum --enablerepo=elrepo-kernel install kernel-lt -y

After installing the kernel, edit the /etc/grub.conf configuration file.

Change default=1 to default=0 in that file, then reboot for the change to take effect.
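A quick sketch of that edit, assuming the newly installed kernel is the first entry in /etc/grub.conf:

sed -i 's/^default=1/default=0/' /etc/grub.conf
grep ^default /etc/grub.conf   # verify before rebooting
reboot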

Configure the repository:

rpm -ivh http://ceph.com/rpm-firefly/el6/noarch/ceph-release-1-0.el6.noarch.rpm

Install ceph-fuse:

yum install ceph-fuse

Note: the Ceph packages must come from the Ceph repository and the version should be 0.80. If the download tries to pull Ceph packages from epel, disable the epel repository first.

Using Ceph block storage on the client:

Copy the keyring and configuration file to the client:

[root@c01 ceph]# scp ceph.conf ceph.client.admin.keyring <client ip>:/etc/ceph/

Create a new Ceph pool:

[root@host-10-10-10-102 fs]# rados mkpool test

successfully created pool test

Or create the pool with the following command:

[root@host-10-10-10-102 fs]# ceph osd pool create test 256

Create an image in the pool:

[root@host-10-10-10-102 fs]# rbd create test-1 --size 4096 -p test
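As a quick check (not part of the original procedure), the new image can be listed and inspected with:

rbd ls -p test
rbd info test-1 -p test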

The stock CentOS 6.5 kernel has no rbd module, so the next step will fail with an error.

Map the image to a block device:

[root@host-10-10-10-102 ~]# rbd map test-1 -p test --name client.admin

ERROR: modinfo: could not find module rbd

FATAL: Module rbd not found.

rbd: modprobe rbd failed! (256)

If you see the errors above, the kernel was built without the rbd module; the fix is to upgrade the kernel.

After upgrading, run the map command again:

[root@host-10-10-10-102 ~]# rbd map test-1 -p test --name client.admin

Check the rbd mappings:

[root@host-10-10-10-102 ~]# rbd showmapped

id pool image snap device   

1  test test-1 -    /dev/rbd1

[root@host-10-10-10-102 ~]#

At this point the rbd device can be used like any other block device.
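For example (a minimal sketch; /mnt/rbd is an arbitrary mount point), the mapped device can be formatted and mounted like an ordinary disk:

mkfs.ext4 /dev/rbd1
mkdir -p /mnt/rbd
mount /dev/rbd1 /mnt/rbd
df -h /mnt/rbd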

3.8   Adding a New osd Node

Configure the repository:

rpm -ivh http://ceph.com/rpm-firefly/el6/noarch/ceph-release-1-0.el6.noarch.rpm

Install the Ceph packages on the osd node:

yum install ceph -y

Environment configuration:

Disable SELinux; stop iptables (or allow port 6789), configure mutual SSH authentication, and configure /etc/hosts.

Add the new osd.

Prepare the disk partition:

[root@c01 ceph]# cd /etc/ceph/

[root@c01 ceph]# ceph-deploy osd prepare c02:/dev/vdb2 --fs-type ext4

[ceph_deploy.conf][DEBUG ] found configurationfile at: /root/.cephdeploy.conf

[ceph_deploy.cli][INFO  ] Invoked (1.5.10): /usr/bin/ceph-deploy osdprepare c02:/dev/vdb2 --fs-type ext4

[ceph_deploy.osd][DEBUG ] Preparing cluster cephdisks c02:/dev/vdb2:

[c02][DEBUG ] connected to host: c02

[c02][DEBUG ] detect platform information fromremote host

[c02][DEBUG ] detect machine type

[ceph_deploy.osd][INFO  ] Distro info: CentOS 6.5 Final

[ceph_deploy.osd][DEBUG ] Deploying osd to c02

[c02][DEBUG ] write cluster configuration to/etc/ceph/{cluster}.conf

[c02][INFO ] Running command: udevadm trigger --subsystem-match=block --action=add

[ceph_deploy.osd][DEBUG ] Preparing host c02 disk/dev/vdb2 journal None activate False

[c02][INFO ] Running command: ceph-disk -v prepare --fs-type ext4 --cluster ceph --/dev/vdb2

[c02][DEBUG ] Filesystem label=

[c02][DEBUG ] OS type: Linux

[c02][DEBUG ] Block size=4096 (log=2)

[c02][DEBUG ] Fragment size=4096 (log=2)

[c02][DEBUG ] Stride=0 blocks, Stripe width=0blocks

[c02][DEBUG ] 3276800 inodes, 13107150 blocks

[c02][DEBUG ] 655357 blocks (5.00%) reserved forthe super user

[c02][DEBUG ] First data block=0

[c02][DEBUG ] Maximum filesystem blocks=4294967296

[c02][DEBUG ] 400 block groups

[c02][DEBUG ] 32768 blocks per group, 32768fragments per group

[c02][DEBUG ] 8192 inodes per group

[c02][DEBUG ] Superblock backups stored on blocks:

[c02][DEBUG ]         32768,98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,

[c02][DEBUG ]         4096000,7962624, 11239424

[c02][DEBUG ]

[c02][DEBUG ] Writing inode tables: done                            

[c02][DEBUG ] Creating journal (32768 blocks):done

[c02][DEBUG ] Writing superblocks and filesystemaccounting information: done

[c02][DEBUG ]

[c02][DEBUG ] This filesystem will beautomatically checked every 24 mounts or

[c02][DEBUG ] 180 days, whichever comesfirst.  Use tune2fs -c or -i to override.

[c02][WARNIN] INFO:ceph-disk:Running command:/usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid

[c02][WARNIN] INFO:ceph-disk:Running command:/usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_ext4

[c02][WARNIN] INFO:ceph-disk:Running command:/usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_ext4

[c02][WARNIN] INFO:ceph-disk:Running command:/usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_ext4

[c02][WARNIN] INFO:ceph-disk:Running command:/usr/bin/ceph-conf --cluster=ceph --name=osd. --lookuposd_fs_mount_options_ext4

[c02][WARNIN] INFO:ceph-disk:Running command:/usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size

[c02][WARNIN] DEBUG:ceph-disk:OSD data device/dev/vdb2 is a partition

[c02][WARNIN] DEBUG:ceph-disk:Creating ext4 fs on/dev/vdb2

[c02][WARNIN] INFO:ceph-disk:Running command:/sbin/mkfs -t ext4 -- /dev/vdb2

[c02][WARNIN] mke2fs 1.41.12 (17-May-2010)

[c02][WARNIN] DEBUG:ceph-disk:Mounting /dev/vdb2on /var/lib/ceph/tmp/mnt.AV760P with options noatime,user_xattr

[c02][WARNIN] INFO:ceph-disk:Running command:/bin/mount -t ext4 -o noatime,user_xattr -- /dev/vdb2/var/lib/ceph/tmp/mnt.AV760P

[c02][WARNIN] DEBUG:ceph-disk:Preparing osd datadir /var/lib/ceph/tmp/mnt.AV760P

[c02][WARNIN] DEBUG:ceph-disk:Unmounting/var/lib/ceph/tmp/mnt.AV760P

[c02][WARNIN] INFO:ceph-disk:Running command:/bin/umount -- /var/lib/ceph/tmp/mnt.AV760P

[c02][WARNIN] INFO:ceph-disk:calling partx onprepared device /dev/vdb2

[c02][WARNIN] INFO:ceph-disk:re-reading knownpartitions will display errors

[c02][WARNIN] INFO:ceph-disk:Running command:/sbin/partx -a /dev/vdb2

[c02][WARNIN] last arg is not the whole disk

[c02][WARNIN] call: partx -opts device wholedisk

[c02][INFO ] checking OSD status...

[c02][INFO ] Running command: ceph --cluster=ceph osd stat --format=json

[ceph_deploy.osd][DEBUG ] Host c02 is now readyfor osd use.

Unhandled exception in thread started by

Error in sys.excepthook:

 

Original exception was:

Activate the disk partition:

[root@c01 ceph]# ceph-deploy osd activate c02:/dev/vdb2

[ceph_deploy.conf][DEBUG ] found configurationfile at: /root/.cephdeploy.conf

[ceph_deploy.cli][INFO  ] Invoked (1.5.10): /usr/bin/ceph-deploy osdactivate c02:/dev/vdb2

[ceph_deploy.osd][DEBUG ] Activating cluster cephdisks c02:/dev/vdb2:

[c02][DEBUG ] connected to host: c02

[c02][DEBUG ] detect platform information fromremote host

[c02][DEBUG ] detect machine type

[ceph_deploy.osd][INFO  ] Distro info: CentOS 6.5 Final

[ceph_deploy.osd][DEBUG ] activating host c02 disk/dev/vdb2

[ceph_deploy.osd][DEBUG ] will use init type:sysvinit

[c02][INFO ] Running command: ceph-disk -v activate --mark-init sysvinit --mount/dev/vdb2

[c02][DEBUG ] === osd.1 ===

[c02][DEBUG ] Starting Ceph osd.1 on c02...

[c02][DEBUG ] starting osd.1 at :/0 osd_data/var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal

[c02][WARNIN] INFO:ceph-disk:Running command:/sbin/blkid -p -s TYPE -ovalue -- /dev/vdb2

[c02][WARNIN] INFO:ceph-disk:Running command:/usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_ext4

[c02][WARNIN] INFO:ceph-disk:Running command:/usr/bin/ceph-conf --cluster=ceph --name=osd. --lookuposd_fs_mount_options_ext4

[c02][WARNIN] DEBUG:ceph-disk:Mounting /dev/vdb2on /var/lib/ceph/tmp/mnt.z797ay with options noatime,user_xattr

[c02][WARNIN] INFO:ceph-disk:Running command:/bin/mount -t ext4 -o noatime,user_xattr -- /dev/vdb2/var/lib/ceph/tmp/mnt.z797ay

[c02][WARNIN] DEBUG:ceph-disk:Cluster uuid is80e9a1bb-6575-4089-89df-31c7c8ea97ac

[c02][WARNIN] INFO:ceph-disk:Running command:/usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid

[c02][WARNIN] DEBUG:ceph-disk:Cluster name is ceph

[c02][WARNIN] DEBUG:ceph-disk:OSD uuid ise145a5b0-72bb-4dfd-82fe-cf012992eaf4

[c02][WARNIN] DEBUG:ceph-disk:Allocating OSD id...

[c02][WARNIN] INFO:ceph-disk:Running command:/usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring/var/lib/ceph/bootstrap-osd/ceph.keyring osd create --concisee145a5b0-72bb-4dfd-82fe-cf012992eaf4

[c02][WARNIN] DEBUG:ceph-disk:OSD id is 1

[c02][WARNIN] DEBUG:ceph-disk:Initializing OSD...

[c02][WARNIN] INFO:ceph-disk:Running command:/usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyringmon getmap -o /var/lib/ceph/tmp/mnt.z797ay/activate.monmap

[c02][WARNIN] got monmap epoch 1

[c02][WARNIN] INFO:ceph-disk:Running command:/usr/bin/ceph-osd --cluster ceph --mkfs --mkkey -i 1 --monmap /var/lib/ceph/tmp/mnt.z797ay/activate.monmap--osd-data /var/lib/ceph/tmp/mnt.z797ay --osd-journal/var/lib/ceph/tmp/mnt.z797ay/journal --osd-uuide145a5b0-72bb-4dfd-82fe-cf012992eaf4 --keyring/var/lib/ceph/tmp/mnt.z797ay/keyring

[c02][WARNIN] 2014-08-18 10:35:46.326872 7f17b9dd37a0-1 journal FileJournal::_open: disabling aio for non-block journal.  Use journal_force_aio to force use of aioanyway

[c02][WARNIN] 2014-08-18 10:35:48.0707897f17b9dd37a0 -1 journal FileJournal::_open: disabling aio for non-blockjournal.  Use journal_force_aio to forceuse of aio anyway

[c02][WARNIN] 2014-08-18 10:35:48.0714957f17b9dd37a0 -1 filestore(/var/lib/ceph/tmp/mnt.z797ay) could not find23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory

[c02][WARNIN] 2014-08-18 10:35:48.7205657f17b9dd37a0 -1 created object store /var/lib/ceph/tmp/mnt.z797ay journal/var/lib/ceph/tmp/mnt.z797ay/journal for osd.1 fsid80e9a1bb-6575-4089-89df-31c7c8ea97ac

[c02][WARNIN] 2014-08-18 10:35:48.7206177f17b9dd37a0 -1 auth: error reading file: /var/lib/ceph/tmp/mnt.z797ay/keyring:can't open /var/lib/ceph/tmp/mnt.z797ay/keyring: (2) No such file or directory

[c02][WARNIN] 2014-08-18 10:35:48.7207037f17b9dd37a0 -1 created new key in keyring /var/lib/ceph/tmp/mnt.z797ay/keyring

[c02][WARNIN] DEBUG:ceph-disk:Marking with initsystem sysvinit

[c02][WARNIN] DEBUG:ceph-disk:Authorizing OSDkey...

[c02][WARNIN] INFO:ceph-disk:Running command:/usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring/var/lib/ceph/bootstrap-osd/ceph.keyring auth add osd.1 -i/var/lib/ceph/tmp/mnt.z797ay/keyring osd allow * mon allow profile osd

[c02][WARNIN] added key for osd.1

[c02][WARNIN] DEBUG:ceph-disk:ceph osd.1 data diris ready at /var/lib/ceph/tmp/mnt.z797ay

[c02][WARNIN] DEBUG:ceph-disk:Moving mount tofinal location...

[c02][WARNIN] INFO:ceph-disk:Running command:/bin/mount -o noatime,user_xattr -- /dev/vdb2 /var/lib/ceph/osd/ceph-1

[c02][WARNIN] INFO:ceph-disk:Running command:/bin/umount -l -- /var/lib/ceph/tmp/mnt.z797ay

[c02][WARNIN] DEBUG:ceph-disk:Starting cephosd.1...

[c02][WARNIN] INFO:ceph-disk:Running command:/sbin/service ceph start osd.1

[c02][WARNIN] create-or-move updating item name'osd.1' weight 0.05 at location {host=c02,root=default} to crush map

[c02][INFO ] checking OSD status...

[c02][INFO ] Running command: ceph --cluster=ceph osd stat --format=json

[root@c01 ceph]#

Use ceph -w to check that the new osd has joined and is in sync; the output now shows HEALTH_OK and two osds:

[root@c01 ceph]# ceph -w

    cluster80e9a1bb-6575-4089-89df-31c7c8ea97ac

     health HEALTH_OK

     monmape1: 1 mons at {c01=192.168.11.111:6789/0}, election epoch 1, quorum 0 c01

     mdsmape9: 1/1/1 up {0=c01=up:active}

     osdmape16: 2 osds: 2 up, 2 in

      pgmapv434: 200 pgs, 4 pools, 44557 bytes data, 23 objects

           10604 MB used, 85067 MB / 100792 MB avail

                200 active+clean

 

2014-08-18 10:38:01.691282 mon.0 [INF] pgmap v435:200 pgs: 200 active+clean; 44557 bytes data, 10604 MB used, 85068 MB / 100792MB avail

3.9   Adding a New mon Node

Ceph monitors are lightweight processes that maintain the master copy of the cluster map. A cluster can run with a single monitor, but for production clusters we recommend at least three. Ceph monitors use PAXOS to agree on the master cluster map, which requires a majority of the monitors to reach consensus (e.g. 1; 3 out of 5; 4 out of 6; and so on). At least three are recommended: a single monitor cannot tolerate failure, and an odd number ensures PAXOS can determine which monitor holds the most recent version of the cluster map.

Because monitors are lightweight, they may run on the same host as an OSD, but we recommend running them on separate hosts.

Environment configuration:

Disable SELinux; stop iptables (or allow port 6789), configure mutual SSH authentication, and configure /etc/hosts.

Important: to establish consensus, a majority of the monitors must be able to reach one another.

We will add a new mon on c02. Since Ceph is already installed on c02 there is no need to install it again; on a brand-new node, install Ceph first with ceph-deploy install <hostname>.

Edit ceph.conf and add the new mon nodes' IPs and hostnames to the mon_host and mon_initial_members parameters.

[root@c01 ~]# cd /etc/ceph/

[root@c01 ceph]# ceph-deploy install c03

[root@c01 ceph]# vim /etc/ceph/ceph.conf

mon_host = 192.168.11.111,192.168.12.112,192.168.11.112

mon_initial_members = c01, c02, c03

ceph-deploy mon create-initial

ceph-deploy mon create c02 c03

4     Using Ceph with OpenStack

4.1    Shared Storage

Ceph can provide a shared file system for Nova: mount the shared directory on the nova-compute nodes to enable shared-storage-based live migration between compute nodes.

service nova-compute stop

ceph-fuse -m 192.168.11.111:6789 /var/lib/nova/instances/

chown -R nova /var/lib/nova/instances

service nova-compute start

It can likewise provide a shared file system for Glance to store images:

/etc/init.d/openstack-glance-api stop

/etc/init.d/openstack-glance-registry stop

ceph-fuse -m 192.168.11.111:6789 /var/lib/glance/images

4.2    RBD Storage

Note: the kernel upgrade is not mandatory. It is only required if you want to use Ceph by mapping and mounting rbd devices; Nova also offers another path that uses rbd through libvirt, which only requires a qemu with rbd support.

The default CentOS 6.5 kernel has no rbd module, so the kernel must be upgraded before rbd can be used this way.

Kernels older than 2.6.34 do not include the rbd module; upgrade the kernel to the latest version:

rpm --import http://elrepo.org/RPM-GPG-KEY-elrepo.org

rpm -Uvh http://elrepo.org/elrepo-release-6-5.el6.elrepo.noarch.rpm

yum --enablerepo=elrepo-kernel install kernel-lt -y

After installing the kernel, edit the /etc/grub.conf configuration file.

Change default=1 to default=0 in that file, then reboot for the change to take effect (the same edit as in section 3.7.2).

4.2.1     Installing the Ceph client tools

Configure the repository:

rpm -ivh http://ceph.com/rpm-firefly/el6/noarch/ceph-release-1-0.el6.noarch.rpm

Install ceph-fuse:

yum install ceph-fuse -y --disablerepo=epel

Copy the keyring and configuration file to the client:

[root@c01 ceph]# scp ceph.conf ceph.client.admin.keyring <client ip>:/etc/ceph/

4.2.2     Creating Ceph pools

Create the pools on the cinder/glance node:

  ceph osd pool create volumes 128

  ceph osd pool create images 128

  ceph osd pool create backups 128

[root@host-10-10-10-101 ceph]# ceph osd poolcreate volumes 128

pool 'volumes' created

[root@host-10-10-10-101 ceph]# ceph osd poolcreate images 128

pool 'images' created

[root@host-10-10-10-101 ceph]# ceph osd poolcreate backups 128

pool 'backups' created
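To confirm the pools were created (a simple check, not part of the original procedure):

ceph osd lspools
rados df          # per-pool usage statistics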

4.2.3     Configuring Nova to use Ceph

Nova uses Ceph RBD through qemu, so qemu must support the rbd format. The default qemu on CentOS 6.5 is 0.12.1, which does not support rbd; upgrade qemu to 0.15 or later, or build a qemu with rbd support.

[Issue: the qemu version must support the rbd format, because libvirt interacts with Ceph storage through qemu commands. You can check with "qemu-img --help":

qemu-img version 0.12.1,

Supported formats: raw cow qcow vdi vmdk cloop dmg bochs vpc vvfat qcow2 qed vhdx parallels nbd blkdebug host_cdrom host_floppy host_device file gluster gluster gluster gluster

As you can see, 0.12.1 does not support rbd, so a version above 0.15 is needed.]

To integrate with Nova, first create a libvirt secret as shown below. This secret is used when qemu creates images; perform these steps on the node where the nova-compute service runs.

Install the rbd-enabled qemu packages:

http://ceph.com/packages/ceph-extras/rpm/centos6/x86_64/qemu-kvm-0.12.1.2-2.415.el6.3ceph.x86_64.rpm

http://ceph.com/packages/ceph-extras/rpm/centos6/x86_64/qemu-img-0.12.1.2-2.415.el6.3ceph.x86_64.rpm

http://ceph.com/packages/ceph-extras/rpm/centos6/x86_64/qemu-kvm-tools-0.12.1.2-2.415.el6.3ceph.x86_64.rpm

After replacing qemu, you can see that it now supports rbd:

Supported formats: raw cow qcow vdi vmdk cloop dmg bochs vpc vvfat qcow2 qed vhdx parallels nbd blkdebug host_cdrom host_floppy host_device file gluster gluster gluster gluster rbd

Create the libvirt secret:

[root@c01 ~]# cat > secret.xml <<EOF 

<secret ephemeral = 'no' private ='no'> 

<usage type = 'ceph'> 

<name>client.admin secret</name>

</usage> 

</secret> 

EOF 

[root@c01 ~]# sudo virsh secret-define --file secret.xml

Secret 28ca7db4-a1f3-e007-2db6-4a06252c8e0a created

Set the libvirt secret value:

[root@c01 ~]# cat /etc/ceph/ceph.client.admin.keyring

[client.admin]

         key = AQDiZuxTeKUjNxAA8gaEVK/GYqXg94lD+LBpUg==

[root@c01 ~]# sudo virsh secret-set-value --secret 28ca7db4-a1f3-e007-2db6-4a06252c8e0a --base64 AQDiZuxTeKUjNxAA8gaEVK/GYqXg94lD+LBpUg==

Secret value set

Edit the Nova configuration:

vim /etc/nova/nova.conf

libvirt_images_type=rbd

libvirt_images_rbd_pool=volumes

libvirt_images_rbd_ceph_conf=/etc/ceph/ceph.conf

rbd_secret_uuid=28ca7db4-a1f3-e007-2db6-4a06252c8e0a

rbd_user=admin

Restart the openstack-nova-compute service:

/etc/init.d/openstack-nova-compute restart

4.2.4     Configuring Cinder to use Ceph

4.2.4.1   Configuring Cinder Volume to use Ceph

Edit the Cinder configuration:

vim /etc/cinder/cinder.conf

volume_driver=cinder.volume.drivers.rbd.RBDDriver

rbd_user=admin

# rbd_secret_uuid=<None>

rbd_secret_uuid=28ca7db4-a1f3-e007-2db6-4a06252c8e0a

rbd_pool=volumes

rbd_ceph_conf=/etc/ceph/ceph.conf

rbd_flatten_volume_from_snapshot=false

rbd_max_clone_depth=5

Restart the openstack-cinder-volume service:

/etc/init.d/openstack-cinder-volume restart

4.2.4.2   Configuring Cinder Backup to use Ceph

Edit the Cinder configuration:

vim /etc/cinder/cinder.conf

backup_driver=cinder.backup.drivers.ceph

backup_ceph_user=admin

backup_ceph_pool=backups

backup_ceph_chunk_size=134217728

backup_ceph_stripe_unit=0

backup_ceph_stripe_count=0

restore_discard_excess_bytes=true

Restart the openstack-cinder-backup service:

/etc/init.d/openstack-cinder-backup restart

4.2.5     Configuring Glance to use Ceph

Edit the glance-api configuration on the Glance node (a consolidated snippet follows the list below):

vim /etc/glance/glance-api.conf

1. Change default_store=file to default_store=rbd:

default_store=rbd

2. Change the rados authentication user:

rbd_store_user = glance

3. Change the rados pool:

rbd_store_pool = images

4. Set the show_image_direct_url parameter to True:

show_image_direct_url = True

5. Restart the openstack-glance-api service:

/etc/init.d/openstack-glance-api restart
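Taken together, the rbd-related settings in /etc/glance/glance-api.conf from the steps above look like this (a recap of the list, not additional configuration):

default_store = rbd
rbd_store_user = glance
rbd_store_pool = images
show_image_direct_url = True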

Reference: http://docs.openfans.org/ceph/ceph4e2d658765876863/ceph-1/copy_of_ceph-block-device3010ceph57578bbe59073011/openstack301057578bbe59077684openstack3011


Reposted from blog.csdn.net/mrz001/article/details/38898217