Ceph
What do the abbreviations mean?
crush: Ceph's placement algorithm (Controlled Replication Under Scalable Hashing)
osd: Object Storage Daemon (also used for Object Storage Device)
rados: Reliable Autonomic Distributed Object Store
rgw: RADOS Gateway, the Ceph Object Gateway, an object storage interface built on top of the librados library to provide applications with a RESTful gateway to Ceph storage clusters
rbd: RADOS Block Device
pg: Placement Group
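To see CRUSH and PGs in action, you can ask Ceph where a given object would be placed; the pool and object names below are placeholders.
# Show which PG and which OSDs (up/acting set) an object maps to
ceph osd map <pool-name> <object-name>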
Look at logs of the Rook operator
Get pod
oc get pods -n openshift-storage -o name -l app=rook-ceph-operator
Look at logs from pod.
oc logs -n openshift-storage -l app=rook-ceph-operator
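A variant that follows the log live and limits the backlog (standard oc/kubectl log flags):
oc logs -f --tail=100 -n openshift-storage -l app=rook-ceph-operator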
versions
rook version
ceph -v
Status of ceph
ceph -s
ceph health detail
Disk usage
ceph df
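For a more detailed, per-pool breakdown, these related commands can help:
ceph df detail
rados df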
Check placement group stats
ceph pg dump
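ceph pg dump is very verbose; for troubleshooting it is often more useful to narrow it down (pool name and PG id below are placeholders):
ceph pg dump_stuck                 # PGs stuck inactive/unclean/stale
ceph pg ls-by-pool <pool-name>
ceph pg <pg-id> query              # detailed state of one PG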
List every OSD along with its class, weight, status, the node it is on, and any reweight or priority.
ceph osd tree
Create or remove OSDs
ceph osd create
ceph osd rm osd.2
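ceph osd rm alone only removes the OSD from the OSD map. A fuller removal sequence, sketched here with osd.2 as in the example above, typically looks like this; newer releases also offer ceph osd purge, which bundles the last steps:
ceph osd out osd.2                          # stop placing data on it and let it drain
# wait for rebalancing to finish (ceph -s), then stop the OSD daemon
ceph osd crush remove osd.2
ceph auth del osd.2
ceph osd rm osd.2
# or, in one step on newer releases:
ceph osd purge osd.2 --yes-i-really-mean-it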
Create or delete a storage pool
ceph osd pool create
ceph osd pool delete
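A worked example with a placeholder pool name; deleting a pool requires mon_allow_pool_delete to be enabled and the pool name given twice:
ceph osd pool create mytestpool 32          # mytestpool and 32 PGs are just examples
ceph config set mon mon_allow_pool_delete true
ceph osd pool delete mytestpool mytestpool --yes-i-really-really-mean-it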
Repair an OSD
ceph osd repair osd.2
Benchmark an OSD
ceph tell osd.* bench
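To benchmark a single OSD with explicit sizes (here 1 GiB total written in 4 MiB writes; adjust as needed):
ceph tell osd.0 bench 1073741824 4194304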
List cluster keys
ceph auth list
Show attributes of pools
ceph osd pool ls detail
Info about CRUSH rules
ceph osd crush rule dump
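To dump and decompile the full CRUSH map (crushtool must be available wherever you run the second step):
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt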
Versions of all daemons in the cluster
ceph versions
how is data placed
ceph pg dump osds
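To see how much data each OSD actually holds, including per-OSD utilisation and PG counts:
ceph osd df
ceph osd df tree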
Get du of the RBD images in the pool using the most space
POOL=$(rados df | grep -vE '^total|^POOL|^$' | sort -k 4 -n | tail -1 | awk '{print $1}')
for i in $(rbd -p $POOL ls | xargs) ; do
  echo $POOL $i
  rbd -p $POOL du $i 2>/dev/null
done
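A sketch of the same loop over every pool instead of only the biggest one; pools without RBD images simply produce no output:
for POOL in $(rados df | grep -vE '^total|^POOL|^$' | awk '{print $1}') ; do
  for i in $(rbd -p $POOL ls 2>/dev/null | xargs) ; do
    echo $POOL $i
    rbd -p $POOL du $i 2>/dev/null
  done
done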
Access Ceph in ODF
Print the export line to paste inside the pod, then rsh into the rook-ceph-operator pod
echo "Paste in pod: export CEPH_ARGS='-c /var/lib/rook/openshift-storage/openshift-storage.config'" ; oc rsh -n openshift-storage $(oc get pods -n openshift-storage -o name -l app=rook-ceph-operator)
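Once inside the pod, paste the export printed above and the normal ceph commands should work, e.g.:
export CEPH_ARGS='-c /var/lib/rook/openshift-storage/openshift-storage.config'
ceph -s
ceph osd tree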
List OSD pools
ceph osd pool ls
ceph osd pool autoscale-status
ceph config dump

# Disable autoscaling
ceph osd pool ls | while read i ; do echo '*' $i ; ceph osd pool set $i pg_autoscale_mode off ; done

# Look to see how much data is being used for PGs:
# number of PGLog entries, size of PGLog data in megabytes, and average size of each PGLog item
for i in 0 1 2 ; do echo '*' $i ; osdid=$i ; ceph tell osd.$osdid dump_mempools | jq -r '.mempool.by_pool.osd_pglog | [ .items, .bytes /1024/1024, .bytes / .items ] | @csv' ; done

ceph df
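To turn autoscaling back on afterwards, run the same loop with the mode flipped:
ceph osd pool ls | while read i ; do echo '*' $i ; ceph osd pool set $i pg_autoscale_mode on ; done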
Manage crashes
Look at ongoing crashes
ceph health detail
Archive old crashes if new ones are not being generated.
ceph crash archive-all
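To inspect crashes individually before archiving them (the crash id comes from the ls output):
ceph crash ls
ceph crash info <crash-id>
ceph crash archive <crash-id>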
List pools with additional information
ceph osd pool ls detail
List users
ceph auth ls
Delete auth
ceph auth del osd.2
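A sketch of creating a user with specific capabilities; client.myapp and mypool are hypothetical names:
ceph auth get-or-create client.myapp mon 'allow r' osd 'allow rw pool=mypool'
ceph auth get client.myapp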