Ceph
=what does it mean?=
crush Ceph’s placement algorithm (Controlled Replication Under Scalable Hashing)
osd Object Storage Daemon; also used for the Object Storage Device it manages
rados Reliable Autonomic Distributed Object Store
rgw RADOS Gateway; the Ceph Object Gateway is an object storage interface built on top of the librados library to provide applications with a RESTful gateway to Ceph storage clusters.
rbd RADOS Block Device
pg Placement Groups
mds metadata server daemon; manages the file system namespace, coordinating access to the shared OSD cluster.
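ceph osd map ties these terms together: it prints which placement group and which OSDs (chosen by crush) a given object name in a given pool would land on. The pool and object names below are placeholders.
ceph osd map mypool myobject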
=look at logs of rook=
Get the pod.
oc get pods -n openshift-storage -o name -l app=rook-ceph-operator
Look at the logs from the pod.
oc logs -n openshift-storage -l app=rook-ceph-operator
=look at ceph cr=
oc api-resources | grep ceph | awk '{print $1}' | while read i ; do echo '*' $i ; oc get $i -A ; done
=versions=
rook version
ceph -v
=Status of ceph=
ceph -s
ceph health detail
=Disk usage=
ceph df
=Check placement group stats=
ceph pg dump
=List every OSD with its class, weight, status, host node, and any reweight or priority=
ceph osd tree
=Create or remove OSDs=
ceph osd create
ceph osd rm osd.2
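ceph osd rm on its own is rarely enough; a rough removal sequence, using osd.2 as the example id:
# take the OSD out of the cluster and let data rebalance first
ceph osd out osd.2
# remove it from the CRUSH map, delete its auth key, then remove the OSD record
ceph osd crush remove osd.2
ceph auth del osd.2
ceph osd rm osd.2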
=Create or delete a storage pool=
ceph osd pool create
ceph osd pool delete
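Both commands take arguments; a minimal sketch with a placeholder pool name (testpool) and 32 placement groups. Deleting a pool also requires mon_allow_pool_delete to be enabled and the pool name repeated:
ceph osd pool create testpool 32
ceph config set mon mon_allow_pool_delete true
ceph osd pool delete testpool testpool --yes-i-really-really-mean-it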
=Repair an OSD=
ceph osd repair
=Benchmark an OSD=
ceph tell osd.* bench
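bench optionally takes a total byte count and a block size; for example, roughly 1 GiB written in 4 MiB blocks on a single OSD (osd.0 picked as an example):
ceph tell osd.0 bench 1073741824 4194304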
=List cluster keys=
ceph auth list
=ceph attributes about a pool=
ceph osd pool ls detail
=Info about crush=
ceph osd crush rule dump
=ceph versions=
ceph versions
=how is data placed=
ceph pg dump osds
=Get du for the biggest pool's block devices=
POOL=$(rados df | grep -vE '^total|^POOL|^$' | sort -k 4 -n | tail -1 | awk '{print $1}') ; for i in $(rbd -p $POOL ls | xargs) ; do echo '*' $POOL $i ; rbd -p $POOL du $i 2>/dev/null ; done
=access ceph in odf=
Print the CEPH_ARGS export to paste inside the pod, then open a shell in the rook-ceph-operator pod.
echo "Paste in pod: export CEPH_ARGS='-c /var/lib/rook/openshift-storage/openshift-storage.config'" ; oc rsh -n openshift-storage $(oc get pods -n openshift-storage -o name -l app=rook-ceph-operator)
Status of ceph, without entering the pod.
oc exec -it -n openshift-storage $(oc get pods -n openshift-storage -o name -l app=rook-ceph-operator) -- bash -c 'CEPH_ARGS="-c /var/lib/rook/openshift-storage/openshift-storage.config" ceph -s'
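A small wrapper function saves retyping the exec line; this is only a sketch (the function name ceph_odf is made up), reusing the same operator pod and config path as above:
ceph_odf() {
  # run an arbitrary ceph subcommand inside the rook-ceph-operator pod
  oc exec -n openshift-storage $(oc get pods -n openshift-storage -o name -l app=rook-ceph-operator) -- \
    bash -c "CEPH_ARGS='-c /var/lib/rook/openshift-storage/openshift-storage.config' ceph $*"
}
# usage: ceph_odf -s ; ceph_odf osd tree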
=list osd pools=
ceph osd pool ls
ceph osd pool autoscale-status
ceph config dump
# disable autoscaling
ceph osd pool ls | while read i ; do echo '*' $i ; ceph osd pool set $i pg_autoscale_mode off ; done
# Look to see how much data is being used for PGs:
# number of PGLog entries, size of PGLog data in megabytes, and average size of each PGLog item.
for i in 0 1 2 ; do echo '*' $i ; osdid=$i ; ceph tell osd.$osdid dump_mempools | jq -r '.mempool.by_pool.osd_pglog | [ .items, .bytes /1024/1024, .bytes / .items ] | @csv' ; done
ceph df
=Manage crashes=
Look at ongoing crashes.
ceph health detail
Wipe old crashes if new ones are not being generated.
ceph crash archive-all
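Crash reports can also be listed, inspected, and archived one at a time before resorting to archive-all (the crash id is a placeholder):
ceph crash ls
ceph crash info <crash-id>
ceph crash archive <crash-id>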
=List pools with additional information=
ceph osd pool ls detail
=ceph auth ls=
List users.
=ceph auth del osd.2=
Delete the auth key for an OSD.
=provisioner=
openshift-storage.cephfs.csi.ceph.com: Ceph File System (POSIX-compliant filesystem). Provisions a volume for ReadWriteMany (RWX) or ReadWriteOnce (RWO) access modes using the Ceph Filesystem configured in a Ceph cluster.
openshift-storage.rbd.csi.ceph.com: Ceph RBD (block device). Provisions a volume with RWO access mode for Ceph RBD, RWO and RWX access modes for block PVCs, and RWO access mode for filesystem PVCs.
openshift-storage.noobaa.io/obc: S3 bucket (MCG Object Bucket Claim). Provisions an object bucket claim to support S3 API calls through the Multicloud Object Gateway (MCG). The exact storage backing the S3 bucket depends on the MCG configuration and the type of deployment.
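To check which storage classes in the cluster actually use these provisioners:
oc get storageclass -o custom-columns=NAME:.metadata.name,PROVISIONER:.provisioner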
=cephblockpool=
oc get cephblockpools -n openshift-storage
=mds (metadata)=
View metadata server information.
ceph mds metadata
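For a quick look at MDS and CephFS state:
ceph mds stat
ceph fs status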
=pv and rbd=
oc get pv -o 'custom-columns=NAME:.spec.claimRef.name,PVNAME:.metadata.name,STORAGECLASS:.spec.storageClassName,VOLUMEHANDLE:.spec.csi.volumeHandle'
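With ceph-csi the RBD image backing a PV is normally named csi-vol- plus the UUID at the end of the volumeHandle, so the two can be cross-checked by eye; a sketch under that assumption (the PV and pool names are placeholders):
# volumeHandle of one PV
oc get pv <pv-name> -o jsonpath='{.spec.csi.volumeHandle}{"\n"}'
# image names in the backing pool
rbd -p <pool> ls | grep csi-vol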
=rbd=
List RADOS block devices per pool.
rados df | grep -vE '^total|^POOL|^$' | sort -k 4 -n | awk '{print $1}' | while read i ; do echo '*' rbd ls $i ; rbd ls $i ; done
=clean bluestore=
# Enable the rook-ceph toolbox.
oc patch OCSInitialization ocsinit -n openshift-storage --type json --patch '[{ "op": "replace", "path": "/spec/enableCephTools", "value": true }]'
# Enter the toolbox.
oc -n openshift-storage rsh $(oc get pods -n openshift-storage -l app=rook-ceph-tools -o name)
ceph config set osd bluestore_fsck_quick_fix_on_mount true
# Delete the OSD pods, one at a time.
oc get pods -n openshift-storage | grep osd
oc delete pod -n openshift-storage -l osd=0
# Remove the rook-ceph toolbox again.
oc patch OCSInitialization ocsinit -n openshift-storage --type json --patch '[{ "op": "remove", "path": "/spec/enableCephTools"}]'
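A loop sketch for restarting the OSD pods one at a time, waiting for the cluster to report HEALTH_OK in between (assumes three OSDs labelled osd=0, 1, 2 and the toolbox pod from above):
for n in 0 1 2 ; do
  oc delete pod -n openshift-storage -l osd=$n
  # wait for ceph to settle before restarting the next OSD
  until oc -n openshift-storage rsh $(oc get pods -n openshift-storage -l app=rook-ceph-tools -o name) ceph health | grep -q HEALTH_OK ; do sleep 30 ; done
done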