=What does it mean?=
 crush          Ceph’s placement algorithm (Controlled Replication Under Scalable Hashing)
 osd            Object Storage Daemon
 rados          Reliable Autonomic Distributed Object Store
 rbd            RADOS Block Device
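A quick sketch of how these layers relate; the pool name rbd is only an example and has to exist in the cluster:
 ceph osd crush rule ls            # CRUSH rules that control placement
 rados -p rbd ls | head            # raw RADOS objects stored on the OSDs
 rbd -p rbd ls                     # block device images layered on top of RADOS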

=ceph=
=Status of ceph=
 ceph -s
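For scripts, the same status is available as JSON; the .health.status path assumes a reasonably recent release and jq installed:
 ceph -s -f json | jq -r '.health.status'   # HEALTH_OK / HEALTH_WARN / HEALTH_ERR
 ceph health detail                          # list the individual warnings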

=Disk usage=
 ceph df
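A more detailed per-pool view, and the same numbers seen from the RADOS side:
 ceph df detail
 rados df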

=Check placement group stats=
 ceph pg dump
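The full dump is large; narrowing it down is usually more useful (pool name rbd is an example):
 ceph pg dump pgs_brief              # one line per PG
 ceph pg ls-by-pool rbd              # PGs belonging to one pool
 ceph pg dump_stuck unclean          # only PGs stuck in an unclean state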

=List every OSD with its class, weight, status, host node, and any reweight or priority=
 ceph osd tree
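The same tree with space usage and PG counts per OSD:
 ceph osd df tree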

=Create or remove OSDs=
 ceph osd create
 ceph osd rm
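ceph osd create/rm only touch the cluster map; taking an OSD out for real is usually a short sequence, sketched here for osd.7 (the id is an example, and the systemctl step applies to non-containerized hosts):
 ceph osd out osd.7                            # stop new data from landing on it
 systemctl stop ceph-osd@7                     # on the host that runs the OSD
 ceph osd purge osd.7 --yes-i-really-mean-it   # remove it from CRUSH, auth and the osdmap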

=Create or delete a storage pool=
 ceph osd pool create
 ceph osd pool delete
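A fuller example with a throwaway pool name; deleting requires the mon_allow_pool_delete flag and the pool name given twice:
 ceph osd pool create testpool 32                  # 32 placement groups
 ceph osd pool application enable testpool rbd     # tag what the pool is used for
 ceph config set mon mon_allow_pool_delete true
 ceph osd pool delete testpool testpool --yes-i-really-really-mean-it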

=Repair an OSD=
 ceph osd repair
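Repair can also be aimed at a single inconsistent placement group; the ids below are examples:
 ceph osd repair osd.3        # repair everything on one OSD
 ceph pg repair 2.1f          # repair one PG reported inconsistent by ceph health detail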

=Benchmark an OSD=
 ceph tell osd.* bench
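bench takes an optional total size and write size in bytes; the defaults are 1 GiB written in 4 MiB blocks:
 ceph tell osd.0 bench                    # defaults
 ceph tell osd.0 bench 104857600 4096     # 100 MiB in 4 KiB writes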

=List cluster keys=
 ceph auth list
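Individual keys can be fetched or created with capabilities; client.backup and the pool name are only examples:
 ceph auth get client.admin
 ceph auth get-or-create client.backup mon 'allow r' osd 'allow r pool=rbd'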

=Ceph attributes about a pool=
 ceph osd pool ls detail
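Attributes of a single pool, all at once or one at a time (pool name rbd is an example):
 ceph osd pool get rbd all
 ceph osd pool get rbd size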

=Info about CRUSH=
 ceph osd crush rule dump

=ceph versions=
 ceph versions

=How is data placed=
 ceph pg dump osds
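To see where one specific object lands, ceph can compute the PG and the acting OSD set; pool, object name and PG id below are examples:
 ceph osd map rbd someobject     # pool + object -> PG and OSDs
 ceph pg map 2.1a                # the same lookup starting from a PG id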

=Get du from block devices in pool named *rbd*=
 POOL=$(ceph osd pool ls | grep rbd) ; for i in $(rbd -p $POOL ls | xargs) ; do echo $POOL $i ; rbd -p $POOL du $i 2>/dev/null ; done
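rbd du can also report a whole pool in one call, which avoids the loop (pool name assumed to be rbd):
 rbd du -p rbd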

=Access Ceph in ODF (OpenShift Data Foundation)=
 oc rsh -n openshift-storage $(oc get pods -n openshift-storage -o name -l app=rook-ceph-operator)
 export CEPH_ARGS='-c /var/lib/rook/openshift-storage/openshift-storage.config'
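The export is meant to be run inside that rsh shell; after it, the usual commands talk to the ODF-internal cluster:
 ceph -s
 ceph osd pool ls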

=List OSD pools=
 ceph osd pool ls
 ceph osd pool autoscale-status
 ceph config dump
 # Disable PG autoscaling on every pool.
 ceph osd pool ls | while read i ; do echo '*' $i ; ceph osd pool set $i pg_autoscale_mode off ; done
 # See how much memory the PG logs use on each OSD:
 # number of PGLog entries, size of the PGLog data in MiB, and average size of each PGLog entry.
 for i in 0 1 2 ; do echo '*' $i ; osdid=$i ; ceph tell osd.$osdid dump_mempools | jq -r '.mempool.by_pool.osd_pglog | [ .items, .bytes /1024/1024, .bytes / .items ] | @csv' ; done
 ceph df
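The same loop with mode on puts the pools back under the autoscaler:
 ceph osd pool ls | while read i ; do ceph osd pool set $i pg_autoscale_mode on ; done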