Introduction to Ceph Commands

Notes on common Ceph commands, Ceph snapshot management, using Ceph RBD block devices with QEMU, and integrating RBD with OpenStack Cinder and Glance.

Daemon-related commands

Check the cluster status

ceph health
ceph health detail
ceph -w

Ceph daemon operations

Start/stop/restart the Ceph daemons (current node):

sudo /etc/init.d/ceph start/stop/restart

Start/stop/restart the Ceph daemons (all nodes):

sudo /etc/init.d/ceph -a start/stop/restart

Ceph daemon types: mon, osd, mds

Restarting services by daemon type

Syntax:

sudo /etc/init.d/ceph [options] [start|restart] [daemonType|daemonID]

Examples:

Start/stop Ceph daemons of a given type (current node):

sudo /etc/init.d/ceph start/stop mon/osd/mds

Start/stop Ceph daemons of a given type (all nodes):

sudo /etc/init.d/ceph -a start/stop mon/osd/mds

Or use the service wrapper:

sudo service ceph [options] [start|restart] [daemonType|daemonID]

Starting or stopping a single daemon instance

Syntax:

sudo /etc/init.d/ceph start/stop {daemon-type}.{instance}

sudo /etc/init.d/ceph -a start/stop {daemon-type}.{instance}

sudo service ceph start {daemon-type}.{instance}

Examples:

Current node:

sudo /etc/init.d/ceph start osd.0

All nodes:

sudo /etc/init.d/ceph -a start osd.0
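
The service wrapper accepts the same daemon specification; for example (a sketch, assuming the sysvinit scripts shown above are installed):

sudo service ceph start osd.0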

Monitoring commands

Monitoring covers: osd, mon, pg, mds.

Interactive-mode commands (enter the shell by running ceph with no arguments):

# ceph
ceph> health
ceph> status
ceph> quorum_status
ceph> mon_status
# non-interactive equivalent, with pretty-printed JSON:
ceph quorum_status --format json-pretty

Watch the cluster status

ceph -w

Check cluster resource usage

ceph df
ceph status
ceph -s

Check OSD status

ceph osd stat
ceph osd dump
ceph osd tree
ceph osd df

Check monitor status

ceph mon stat
ceph mon dump

Check the monitor quorum (election) status

ceph quorum_status

Check MDS status

ceph mds stat
ceph mds dump

Note: metadata servers have two state sets: up|down and active|inactive.

RBD commands

Preparation

ceph osd pool create xiexianbin 16
ceph osd pool set rbd pg_num 128
ceph osd pool set rbd pgp_num 128
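
Note that ceph osd pool set changes one variable per invocation, and pgp_num may not exceed pg_num, which is why pg_num is raised first. To verify the values:

ceph osd pool get rbd pg_num
ceph osd pool get rbd pgp_num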
  • Create a block device image
rbd create bar --size 10240 --pool xiexianbin
  • List block device images
rbd ls {poolname}
rbd ls xiexianbin
  • Show detailed information for a given image
rbd --image {image-name} -p {pool-name} info
rbd --image bar -p xiexianbin info
  • Resize a block device image
rbd resize --image bar -p xiexianbin --size 20480
  • Remove a block device image
rbd rm {image-name} -p {pool-name}
rbd rm bar -p xiexianbin

Kernel module operations

Preparation:

rbd create image-block --size 10240 --pool xiexianbin
  • List all images in a pool (ls is an alias for list)
rbd ls {poolname}
  • Map a block device
sudo rbd map {image-name} --pool {pool-name} --id {user-name} --keyring {keyring_path}
sudo rbd map --pool xiexianbin image-block

Known issue: mapping fails when the image name is image-block-1. The failing command:

sudo rbd map image-block-1 --pool xiexianbin  # fails
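
When a map fails like this, the kernel RBD client usually logs the reason, so checking the kernel ring buffer is a reasonable first diagnostic (generic, not specific to this bug):

dmesg | tail -n 20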
  • Show mapped block devices
rbd showmapped
  • Unmap a block device
sudo rbd unmap /dev/rbd/{poolname}/{imagename}
sudo rbd unmap /dev/rbd1
  • Create a filesystem and mount the device
sudo mkfs.ext4 /dev/rbd1
mkdir /home/ceph/xxb/rbd1
sudo mount /dev/rbd1 /home/ceph/xxb/rbd1
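
To detach cleanly afterwards, reverse the steps (unmount before unmapping):

sudo umount /home/ceph/xxb/rbd1
sudo rbd unmap /dev/rbd1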

Snapshots

Note: when creating a snapshot of an image, the image should be in a quiescent state (that is, with no I/O in flight).
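
One way to quiesce a mounted image before snapshotting is to freeze its filesystem; a sketch assuming the ext4 mount at /home/ceph/xxb/rbd1 from the kernel-module section, with the snapshot name quiesced-bak purely illustrative:

sudo fsfreeze -f /home/ceph/xxb/rbd1   # block new writes and flush dirty data
rbd snap create xiexianbin/image-block@quiesced-bak
sudo fsfreeze -u /home/ceph/xxb/rbd1   # thaw the filesystem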

Create

rbd --pool {pool-name} snap create --snap {snap-name} {image-name}
rbd snap create {pool-name}/{image-name}@{snap-name}

For example:

rbd --pool xiexianbin snap create --snap bak1 image-block
rbd snap create xiexianbin/image-block@bak2

List

rbd --pool {pool-name} snap ls {image-name}
rbd snap ls {pool-name}/{image-name}
rbd --pool xiexianbin snap ls image-block
rbd snap ls xiexianbin/image-block

Rollback

rbd --pool {pool-name} snap rollback --snap {snap-name} {image-name}
rbd snap rollback {pool-name}/{image-name}@{snap-name}
rbd --pool xiexianbin snap rollback --snap bak1 image-block
rbd snap rollback xiexianbin/image-block@bak2

Delete

rbd --pool {pool-name} snap rm --snap {snap-name} {image-name}
rbd snap rm {pool-name}/{image-name}@{snap-name}
rbd --pool xiexianbin snap rm --snap bak2 image-block
rbd snap rm xiexianbin/image-block@bak1

Purge (remove all snapshots of an image)

rbd --pool {pool-name} snap purge {image-name}
rbd snap purge {pool-name}/{image-name}
rbd --pool xiexianbin snap purge image-block
rbd snap purge xiexianbin/image-block

qemu

Common commands

qemu-img {command} [options] rbd:glance-pool/maipo:id=glance:conf=/etc/ceph/ceph.conf

qemu-img create -f raw rbd:{pool-name}/{image-name} {size}

qemu-img create -f raw rbd:data/foo 10G

qemu-img resize rbd:{pool-name}/{image-name} {size}

qemu-img resize rbd:data/foo 10G

qemu-img info rbd:{pool-name}/{image-name}

qemu-img info rbd:data/foo

Running QEMU with RBD

You can use qemu-img to convert existing virtual machine images to Ceph block device images. For example, if you have a qcow2 image, you could run:

qemu-img convert -f qcow2 -O raw debian_squeeze.qcow2 rbd:data/squeeze

To run a virtual machine booting from that image, you could run:

qemu -m 1024 -drive format=raw,file=rbd:data/squeeze

RBD caching can significantly improve performance. Since QEMU 1.2, QEMU’s cache options control librbd caching:

qemu -m 1024 -drive format=rbd,file=rbd:data/squeeze,cache=writeback
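
librbd caching can also be enabled on the Ceph client side in ceph.conf instead of per-drive; a minimal sketch, using documented librbd cache options with illustrative values:

[client]
rbd cache = true
rbd cache writethrough until flush = true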

libvirt

Configure Ceph

http://docs.ceph.com/docs/v0.94.4/rbd/libvirt/

  • Create a pool
ceph osd pool create libvirt-pool 64 64
  • Create a Ceph User
ceph auth get-or-create client.libvirt mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=libvirt-pool'
  • Use QEMU to create an image in your RBD pool
qemu-img create -f rbd rbd:libvirt-pool/new-libvirt-image 2G
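
To attach that image to a guest, define a libvirt disk that uses the rbd network protocol; a minimal sketch, where {vm-domain}, {mon-host}, and {secret-uuid} are placeholders for your domain name, a monitor address, and the libvirt secret UUID:

cat > rbd-disk.xml <<EOF
<disk type='network' device='disk'>
  <source protocol='rbd' name='libvirt-pool/new-libvirt-image'>
    <host name='{mon-host}' port='6789'/>
  </source>
  <auth username='libvirt'>
    <secret type='ceph' uuid='{secret-uuid}'/>
  </auth>
  <target dev='vdb' bus='virtio'/>
</disk>
EOF
sudo virsh attach-device {vm-domain} rbd-disk.xml --persistent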

Integrating RBD with OpenStack

Create the pools

ceph osd pool create volumes 32 32
ceph osd pool create images 16 16
ceph osd pool create backups 32 32
ceph osd pool create vms 32 32

Configure the OpenStack Ceph clients

http://docs.ceph.com/docs/v0.94.4/rbd/rbd-openstack/

ssh {your-openstack-server} sudo tee /etc/ceph/ceph.conf </etc/ceph/ceph.conf
  • On the glance-api node, install the RBD Python bindings
sudo yum install python-rbd

On the nova-compute, cinder-backup, and cinder-volume nodes, install Ceph:

sudo yum install ceph
  • Create the Ceph users
ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'
ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
ceph auth get-or-create client.cinder-backup mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=backups'
  • Copy the keyrings to the corresponding nodes
ceph auth get-or-create client.glance | ssh {your-glance-api-server} sudo tee /etc/ceph/ceph.client.glance.keyring
ssh {your-glance-api-server} sudo chown glance:glance /etc/ceph/ceph.client.glance.keyring
ceph auth get-or-create client.cinder | ssh {your-volume-server} sudo tee /etc/ceph/ceph.client.cinder.keyring
ssh {your-cinder-volume-server} sudo chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring
ceph auth get-or-create client.cinder-backup | ssh {your-cinder-backup-server} sudo tee /etc/ceph/ceph.client.cinder-backup.keyring
ssh {your-cinder-backup-server} sudo chown cinder:cinder /etc/ceph/ceph.client.cinder-backup.keyring
  • Get the key
ceph auth get-key client.cinder | ssh {your-compute-node} tee client.cinder.key
  • Configure the libvirt secret
# uuidgen
457eb676-33da-42ec-9a8c-9293d545c337
cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>457eb676-33da-42ec-9a8c-9293d545c337</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF
sudo virsh secret-define --file secret.xml
Secret 457eb676-33da-42ec-9a8c-9293d545c337 created
sudo virsh secret-set-value --secret 457eb676-33da-42ec-9a8c-9293d545c337 --base64 $(cat client.cinder.key) && rm client.cinder.key secret.xml
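
With the secret defined, point the services at the pools. A minimal sketch of the relevant settings; option names follow the Ceph v0.94 OpenStack guide linked above, and the ceph backend section name is illustrative:

# /etc/cinder/cinder.conf
[DEFAULT]
enabled_backends = ceph
[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_user = cinder
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337
rbd_ceph_conf = /etc/ceph/ceph.conf

# /etc/glance/glance-api.conf
[glance_store]
default_store = rbd
stores = rbd
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf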

Miscellaneous

Show a daemon's full runtime configuration (via the admin socket)

ceph daemon mon.compute-192-168-2-202 config show | more
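
A single option can be queried the same way through the admin socket (mon_data here is just an example option name):

ceph daemon mon.compute-192-168-2-202 config get mon_data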

Show the Ceph log file location

ceph-conf --name mon.compute-192-168-2-202 --show-config-value log_file

Reference:

http://docs.ceph.com/docs/master/rados/operations/operating/#running-ceph-with-sysvinit
