Ceph
Revision as of 16:50, 30 April 2023
=What does it mean?=
* osd: Object Storage Daemon
* rados: Reliable Autonomic Distributed Object Store
* rbd: RADOS Block Device
=ceph=
Status of ceph

 ceph -s
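`ceph -s` also has machine-readable output (`ceph -s -f json`), which is handier for scripting than scraping the human-readable report. A minimal sketch of pulling the overall health field out of such a report; the JSON document below is a hypothetical sample so the extraction step can be shown without a live cluster:

```shell
# On a live cluster you would capture it with: status_json=$(ceph -s -f json)
# Hypothetical sample standing in for that output:
status_json='{"health":{"status":"HEALTH_OK"}}'
# Extract the overall health string with plain sed (no jq needed):
printf '%s' "$status_json" | sed -n 's/.*"status":"\([A-Z_]*\)".*/\1/p'
# prints HEALTH_OK
```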
Disk usage

 ceph df
Check placement group stats

 ceph pg dump
View the CRUSH map

 ceph osd tree
Create or remove OSDs

 ceph osd create
 ceph osd rm
Create or delete a storage pool (deletion requires the pool name twice plus an explicit confirmation flag, and `mon_allow_pool_delete` must be enabled)

 ceph osd pool create <pool> [<pg-num>]
 ceph osd pool delete <pool> <pool> --yes-i-really-really-mean-it
Repair an OSD

 ceph osd repair <osd-id>
=Get du from block devices in pool named *rbd*=
 POOL=$(ceph osd pool ls | grep rbd)
 for i in $(rbd -p "$POOL" ls | xargs) ; do
   echo "$POOL" "$i"
   rbd -p "$POOL" du "$i" 2>/dev/null
 done
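The loop above prints one `rbd du` report per image. To total the usage across images you can post-process the USED column; the listing below is a hypothetical sample of `rbd du`-style output (NAME, PROVISIONED, USED) so the summing step can run without a cluster:

```shell
# Hypothetical sample of "rbd -p rbd du" output; on a live cluster
# you would pipe the real command output into the awk stage instead.
sample='img1 10G 2G
img2 20G 5G'
# Sum the USED column (the sample uses plain gibibyte suffixes):
printf '%s\n' "$sample" | awk '{ gsub(/G/, "", $3); used += $3 } END { print used "G" }'
# prints 7G
```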
=Access Ceph in ODF (OpenShift Data Foundation)=
 oc rsh -n openshift-storage $(oc get pods -n openshift-storage -o name -l app=rook-ceph-operator)
 # then, inside the operator pod:
 export CEPH_ARGS='-c /var/lib/rook/openshift-storage/openshift-storage.config'
=List OSD pools=
 ceph osd pool ls
 ceph osd pool autoscale-status
 ceph config dump

 # disable autoscaling
 ceph osd pool ls | while read i ; do echo '*' $i ; ceph osd pool set $i pg_autoscale_mode off ; done

 # Look at how much data the PG logs are using:
 # number of PGLog entries, size of PGLog data in megabytes, and average size of each PGLog item
 for i in 0 1 2 ; do
   osdid=$i
   echo '*' $osdid
   ceph tell osd.$osdid dump_mempools | jq -r '.mempool.by_pool.osd_pglog | [ .items, .bytes/1024/1024, .bytes / .items ] | @csv'
 done

 ceph df
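The jq expression above turns the `osd_pglog` mempool counters into a CSV row of item count, total megabytes, and bytes per item. The same arithmetic, run on hypothetical counter values rather than a live `ceph tell` response, looks like:

```shell
# Hypothetical mempool counters (not from a real cluster):
items=3000
bytes=12582912   # 12 MiB
awk -v i="$items" -v b="$bytes" \
  'BEGIN { printf "%.0f MiB total, %.0f bytes/item\n", b/1024/1024, b/i }'
# prints: 12 MiB total, 4194 bytes/item
```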