Ceph
what does it mean?
crush   Controlled Replication Under Scalable Hashing
osd     Object Storage Daemon
rados   Reliable Autonomic Distributed Object Store
rbd     RADOS Block Device
ceph
Status of ceph
ceph -s
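To follow the cluster continuously instead of taking a one-shot snapshot, or to expand health warnings:
ceph -w              # stream cluster events as they happen
ceph health detail   # list each health warning with details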
Disk usage
ceph df
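For a more verbose per-pool breakdown:
ceph df detail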
Check placement group stats
ceph pg dump
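ceph pg dump prints everything; for a quick summary or a compact per-PG listing, shorter forms may be enough:
ceph pg stat            # one-line summary of PG states
ceph pg dump pgs_brief  # per-PG state without the full statistics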
List every OSD with its class, weight, status, host, and any reweight or priority.
ceph osd tree
Create or remove OSDs
ceph osd create
ceph osd rm
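As a sketch, removing OSD 3 cleanly usually means marking it out, stopping the daemon on its host, and then removing it from CRUSH, auth, and the OSD map (the id 3 is just an example):
ceph osd out osd.3             # stop placing data on it and let it drain
systemctl stop ceph-osd@3      # on the host that runs the OSD (assumes systemd-managed OSDs)
ceph osd crush remove osd.3
ceph auth del osd.3
ceph osd rm 3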
Create or delete a storage pool
ceph osd pool create
ceph osd pool delete
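For example, creating a replicated pool with 128 placement groups and deleting it again (mypool is a placeholder; deletion also requires the monitors to allow it via mon_allow_pool_delete):
ceph osd pool create mypool 128 128
ceph osd pool delete mypool mypool --yes-i-really-really-mean-it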
Repair an OSD
ceph osd repair
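The command takes an OSD id; a single inconsistent placement group can also be repaired directly (2 and 2.1f are placeholder ids):
ceph osd repair 2
ceph pg repair 2.1f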
Benchmark an OSD
ceph tell osd.* bench
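To bench a single OSD, or to control how much is written and the write size (here 1 GiB total in 4 MiB writes):
ceph tell osd.0 bench
ceph tell osd.0 bench 1073741824 4194304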
List cluster keys
ceph auth list
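A single key can also be fetched by name, e.g. the admin key:
ceph auth get client.admin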
Get du for every block device in the pool whose name matches rbd
POOL=$(ceph osd pool ls | grep rbd) ; for i in $(rbd -p $POOL ls | xargs) ; do echo $POOL $i ; rbd -p $POOL du $i 2>/dev/null ; done
Access ceph in ODF (OpenShift Data Foundation)
# open a shell in the rook-ceph-operator pod
oc rsh -n openshift-storage $(oc get pods -n openshift-storage -o name -l app=rook-ceph-operator)
# then, inside the pod, point the ceph tools at the cluster config
export CEPH_ARGS='-c /var/lib/rook/openshift-storage/openshift-storage.config'
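Once CEPH_ARGS is set inside the pod, the usual ceph commands should work, e.g.:
ceph -s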
List OSD pools, autoscale status, and PG log usage
ceph osd pool ls
ceph osd pool autoscale-status
ceph config dump
# disable autoscaling
ceph osd pool ls | while read i ; do echo '*' $i ; ceph osd pool set $i pg_autoscale_mode off ; done
# Look at how much data is being used for PGs:
# number of PGLog entries, size of PGLog data in megabytes, and average size of each PGLog item
for i in 0 1 2 ; do echo '*' $i ; osdid=$i ; ceph tell osd.$osdid dump_mempools | jq -r '.mempool.by_pool.osd_pglog | [ .items, .bytes /1024/1024, .bytes / .items ] | @csv' ; done
ceph df
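To turn autoscaling back on for a pool afterwards (mypool is a placeholder):
ceph osd pool set mypool pg_autoscale_mode on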