Ceph

What do the terms mean?

crush          Ceph’s placement algorithm (Controlled Replication Under Scalable Hashing)
osd            object storage daemon
rados          Reliable Autonomic Distributed Object Store
rbd            Rados Block Device
pg             Placement Groups
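
To see how these fit together, ceph osd map asks CRUSH where an object would be placed; the pool and object names below are placeholders.

# Placeholder pool/object names; prints the PG and up/acting OSD set CRUSH selects.
ceph osd map mypool myobject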

Look at logs of rook

Get pod

oc get pods -n openshift-storage -o name -l app=rook-ceph-operator

Look at logs from pod.

oc logs -n openshift-storage -l app=rook-ceph-operator
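
A sketch combining the two steps: capture the pod name, then follow its log.

POD=$(oc get pods -n openshift-storage -o name -l app=rook-ceph-operator)
oc logs -n openshift-storage -f "$POD"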

Versions

rook version
ceph -v

Status of ceph

ceph -s
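
For scripting, the status is also available as JSON; a minimal sketch extracting just the health field (assumes jq is available; the field layout can differ between releases).

ceph -s --format json | jq -r '.health.status'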

Disk usage

ceph df
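
For per-pool detail (quotas, compression columns) on top of the summary:

ceph df detail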

Check placement group stats

ceph pg dump
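
The full dump is very large; the pgs_brief variant limits output to PG id, state, and up/acting OSD sets:

ceph pg dump pgs_brief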

Show every OSD along with its class, weight, status, the node it sits on, and any reweight or priority.

ceph osd tree
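
A related view that folds per-OSD disk usage and PG counts into the same tree layout:

ceph osd df tree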

Create or remove OSDs

ceph osd create
ceph osd rm
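
Removing an OSD cleanly usually takes more than ceph osd rm; a sketch of the common sequence for a hypothetical osd.3:

ID=3                            # placeholder OSD id
ceph osd out osd.$ID            # stop new data landing on it, let data drain
ceph osd crush remove osd.$ID   # drop it from the CRUSH map
ceph auth del osd.$ID           # remove its key
ceph osd rm osd.$ID             # finally remove the OSD entry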

Create or delete a storage pool

ceph osd pool create
ceph osd pool delete
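
Both commands need arguments; a sketch with a placeholder pool name (deletion requires repeating the name and enabling mon_allow_pool_delete):

ceph osd pool create mypool 32                  # 32 placement groups
ceph config set mon mon_allow_pool_delete true  # deletion is disabled by default
ceph osd pool delete mypool mypool --yes-i-really-really-mean-it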

Repair an OSD

ceph osd repair
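
Repair can also be aimed at a single inconsistent placement group; the PG id here is a placeholder:

ceph pg repair 2.5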

Benchmark an OSD

ceph tell osd.* bench
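
bench also accepts total bytes and block size, for example 1 GiB written in 4 MiB blocks against a single OSD:

ceph tell osd.0 bench 1073741824 4194304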

List cluster keys

ceph auth list
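
To fetch a single entry, or only its secret key (client.admin used as the example):

ceph auth get client.admin
ceph auth get-key client.admin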

Ceph attributes about a pool

ceph osd pool ls detail

Info about crush

ceph osd crush rule dump
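
The full CRUSH map can also be exported and decompiled for inspection (output paths here are placeholders; crushtool ships with Ceph):

ceph osd getcrushmap -o /tmp/crushmap.bin
crushtool -d /tmp/crushmap.bin -o /tmp/crushmap.txt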

Daemon versions across the cluster

ceph versions

How is data placed?

ceph pg dump osds
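
To go the other way and list the PGs hosted on one OSD (osd.0 as an example):

ceph pg ls-by-osd osd.0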

Get du from biggest block devices

POOL=$(rados df | grep -vE '^total|^POOL|^$' | sort -k 4 -n | tail -1 | awk '{print $1}') ; for i in $(rbd -p "$POOL" ls | xargs) ; do echo "$POOL $i" ; rbd -p "$POOL" du "$i" 2>/dev/null ; done

Access Ceph in ODF

Print the export line to paste into the shell of the Ceph pod, then open a shell in it.

echo "Paste in pod: export CEPH_ARGS='-c /var/lib/rook/openshift-storage/openshift-storage.config'" ; oc rsh -n openshift-storage $(oc get pods -n openshift-storage -o name -l app=rook-ceph-operator)

List OSD pools

ceph osd pool ls
ceph osd pool autoscale-status
ceph config dump
# disable autoscaling
ceph osd pool ls | while read i ; do echo '*' $i ; ceph osd pool set $i pg_autoscale_mode off ; done
# Look at how much memory is being used for PG logs.
# Number of PGLog entries, size of PGLog data in megabytes, and average size of each PGLog item.
for i in 0 1 2 ; do echo '*' $i ; osdid=$i ; ceph tell osd.$osdid dump_mempools | jq -r '.mempool.by_pool.osd_pglog | [ .items, .bytes /1024/1024, .bytes / .items ] | @csv' ; done
ceph df
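
A variant of the PG log loop above that iterates over every OSD id the cluster reports, instead of a hardcoded list:

for osdid in $(ceph osd ls) ; do echo '*' osd.$osdid ; ceph tell osd.$osdid dump_mempools | jq -r '.mempool.by_pool.osd_pglog | [ .items, .bytes /1024/1024, .bytes / .items ] | @csv' ; done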