{"id":875,"date":"2022-04-20T13:34:20","date_gmt":"2022-04-20T11:34:20","guid":{"rendered":"https:\/\/www.gonscak.sk\/?p=875"},"modified":"2022-04-22T15:20:35","modified_gmt":"2022-04-22T13:20:35","slug":"how-to-create-ceph-pacific-on-centos-8-stream-via-cephadm","status":"publish","type":"post","link":"https:\/\/www.gonscak.sk\/?p=875","title":{"rendered":"How to create ceph Pacific on Centos 8 Stream via Cephadm"},"content":{"rendered":"\n<p>Today, we create a ceph network storage on our Centos 8 Stream with cephadm command. We will installing with manual page: <a href=\"https:\/\/docs.ceph.com\/en\/pacific\/install\/\">https:\/\/docs.ceph.com\/en\/pacific\/install\/<\/a><\/p>\n\n\n\n<p>In this example, we will have three systems (nodes), with identical HW resources (4 GB ram, 4 vCPU, two NICs \u2013 one internal for ceph and one for world, and dedicated 4 TB SSD disk for ceph storage). In this article, every command must be run on all nodes. Public network is 192.168.1.0\/24 and Ceph separate network is 192.168.2.0\/24<\/p>\n\n\n\n<p>So, as a first step, we need to set up our Centos for synchronized time. <\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Setting up time<\/h2>\n\n\n\n<p>As the first step, we must set up a time, I use chrony:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">dnf install chrony -y\nsystemctl enable chronyd\ntimedatectl set-timezone Europe\/Bratislava\ntimedatectl<\/pre>\n\n\n\n<p>Now, edit some variables in configurations file for chronyd. Add some servers from pool, and edit local subnets, where we delived time:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">vim \/etc\/chrony.conf\n\npool 2.centos.pool.ntp.org iburst\npool 1.centos.pool.ntp.org iburst\npool 3.centos.pool.ntp.org iburst<\/pre>\n\n\n\n<p>Now start\/restart our service, and check, if it is working:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">systemctl restart chronyd\nsystemctl status chronyd.service\nchronyc sources<\/pre>\n\n\n\n<h2 class=\"wp-block-heading\">Create hostnames, ssh rsa-keys and update<\/h2>\n\n\n\n<p>Now, we must edit on all nodes our hostnames, set it permanent:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">hostnamectl set-hostname ceph1<\/pre>\n\n\n\n<p>Now, add all hostnames, and IPs to file \/etc\/hosts:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">tee -a \/etc\/hosts&lt;&lt;EOF\n192.168.1.1    ceph1\n192.168.1.2    ceph2\n192.168.1.3    ceph3\nEOF<\/pre>\n\n\n\n<p>Now, create rsa-key pair, for password-less connect to and from each node for root user for installing and updating:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">ssh-keygen -t rsa -b 4096 -C \"ceph1\"\n\nmeans:\n-b bits. Number of bits in the key to create\n-t type. Specify type of key to create\n-C comment<\/pre>\n\n\n\n<p>And copy it to other nodes:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">for host in ceph1 ceph2 ceph3; do\n ssh-copy-id root@$host\ndone<\/pre>\n\n\n\n<p>Now update:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">dnf update -y\ndnf install podman gdisk jq -y\n-reboot-<\/pre>\n\n\n\n<h2 class=\"wp-block-heading\">Preparing for ceph<\/h2>\n\n\n\n<p>Now, setup a yum\/dnf based repository for ceph packages and updates and install package cephadm:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">dnf install -y centos-release-ceph-pacific.noarch\ndnf install -y cephadm<\/pre>\n\n\n\n<p>Now, we can Bootstrap a new cluster. The first step in creating a new Ceph cluster is running the <code>cephadm bootstrap<\/code> command on the Ceph cluster\u2019s first host. 
The act of running the <code>cephadm bootstrap<\/code> command on the Ceph cluster\u2019s first host creates the Ceph cluster\u2019s first \u201cmonitor daemon\u201d, and that monitor daemon needs an IP address. You must pass the IP address of the Ceph cluster\u2019s first host to the <code>cephadm bootstrap<\/code> command, so you\u2019ll need to know the IP address of that host.<\/p>\n\n\n\n<p>Now, we can bootstrap our first monitor with the following command (run this on one node only!):<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">cephadm bootstrap --mon-ip 192.168.1.1 --cluster-network 192.168.2.0\/24<\/pre>\n\n\n\n<p>After some time the bootstrap finishes, we have a working cluster, and we can connect to the dashboard:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">Generating a dashboard self-signed certificate...\nCreating initial admin user...\nFetching dashboard port number...\nfirewalld ready\nEnabling firewalld port 8443\/tcp in current zone...\nCeph Dashboard is now available at:\n\n\t     URL: https:\/\/ceph1.example.com:8443\/\n\t    User: admin\n\tPassword: tralala\n\nEnabling client.admin keyring and conf on hosts with \"admin\" label\nYou can access the Ceph CLI with:\n\n\tsudo \/usr\/sbin\/cephadm shell --fsid xxx -c \/etc\/ceph\/ceph.conf -k \/etc\/ceph\/ceph.client.admin.keyring\n\nPlease consider enabling telemetry to help improve Ceph:\n\tceph telemetry on\nFor more information see:\n\thttps:\/\/docs.ceph.com\/docs\/pacific\/mgr\/telemetry\/<\/pre>\n\n\n\n<p>Now, we can log in to the web interface (dashboard). At the first login, we use the username and password mentioned above, and we have to change the password. <\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"951\" height=\"438\" src=\"https:\/\/www.gonscak.sk\/wp-content\/uploads\/2022\/04\/image.png\" alt=\"\" class=\"wp-image-883\" srcset=\"https:\/\/www.gonscak.sk\/wp-content\/uploads\/2022\/04\/image.png 951w, https:\/\/www.gonscak.sk\/wp-content\/uploads\/2022\/04\/image-300x138.png 300w, https:\/\/www.gonscak.sk\/wp-content\/uploads\/2022\/04\/image-768x354.png 768w\" sizes=\"auto, (max-width: 951px) 100vw, 951px\" \/><\/figure>\n\n\n\n<p>But our cluster is not finished yet \ud83d\ude42<\/p>\n\n\n\n<p>So, we continue.<\/p>
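\n\n\n\n<p>Optionally, before we go on, we can verify on this node that cephadm has started its daemons as podman containers. This is just a quick sanity check; the container names and IDs will of course differ in your environment:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"># list the running containers started by cephadm (mon, mgr, dashboard, ...)\npodman ps<\/pre>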
\n\n\n\n<p>On the first node (ceph1), where we bootstrapped Ceph, we can view the cluster status with this command:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">cephadm shell -- ceph -s\n\ncluster:\n    id:     77e12ffa-c017-11ec-9124-c67be67db31c\n    health: HEALTH_WARN\n            OSD count 0 &lt; osd_pool_default_size 3\n \n  services:\n    mon: 1 daemons, quorum ceph1 (age 26m)\n    mgr: ceph1.rgzjga(active, since 23m)\n    osd: 0 osds: 0 up, 0 in\n \n  data:\n    pools:   0 pools, 0 pgs\n    objects: 0 objects, 0 B\n    usage:   0 B used, 0 B \/ 0 B avail\n    pgs:     <\/pre>\n\n\n\n<p>But checking the Ceph status this way is cumbersome, so we install the ceph-common package via cephadm on every node:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">cephadm add-repo --release pacific\ncephadm install ceph-common\nceph status\n\n  cluster:\n    id:     77e12ffa-c017-11ec-9124-c67be67db31c\n    health: HEALTH_WARN\n            OSD count 0 &lt; osd_pool_default_size 3\n \n  services:\n    mon: 1 daemons, quorum ceph1 (age 27m)\n    mgr: ceph1.rgzjga(active, since 24m)\n    osd: 0 osds: 0 up, 0 in\n \n  data:\n    pools:   0 pools, 0 pgs\n    objects: 0 objects, 0 B\n    usage:   0 B used, 0 B \/ 0 B avail\n    pgs:     <\/pre>\n\n\n\n<p>Now, we copy Ceph\u2019s SSH public key to the other hosts, so that cephadm can manage them without a password:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">ssh-copy-id -f -i \/etc\/ceph\/ceph.pub root@ceph2\nssh-copy-id -f -i \/etc\/ceph\/ceph.pub root@ceph3<\/pre>\n\n\n\n<p>And now we can add these nodes to Ceph, running the commands from the first node (where we bootstrapped Ceph), and label them as admin. After these commands, wait some time for podman to deploy the containers (monitor, manager).<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">ceph orch host add ceph2 192.168.1.2\nceph orch host add ceph3 192.168.1.3\nceph orch host label add ceph2 _admin\nceph orch host label add ceph3 _admin<\/pre>\n\n\n\n<p>Now, we can see which disks are available to us:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">ceph orch device ls\nHOST          PATH      TYPE  DEVICE ID                   SIZE  AVAILABLE  REJECT REASONS  \nceph1  \/dev\/sda  ssd   QEMU_HARDDISK_drive-scsi2  4000G  Yes                        \nceph2  \/dev\/sda  ssd   QEMU_HARDDISK_drive-scsi2  4000G  Yes                        \nceph3  \/dev\/sda  ssd   QEMU_HARDDISK_drive-scsi2  4000G  Yes      <\/pre>\n\n\n\n<p>So now we can create OSDs from these devices:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">ceph orch daemon add osd ceph1:\/dev\/sda\n    Created osd(s) 0 on host 'ceph1'\n\nceph orch daemon add osd ceph2:\/dev\/sda\n    Created osd(s) 1 on host 'ceph2'\n\nceph orch daemon add osd ceph3:\/dev\/sda\n    Created osd(s) 2 on host 'ceph3'\n\nceph -s\n  cluster:\n    id:     77e12ffa-c017-11ec-9124-c67be67db31c\n    health: HEALTH_OK\n \n  services:\n    mon: 3 daemons, quorum ceph1,ceph3,ceph2 (age 8m)\n    mgr: ceph1.vsshgj(active, since 8m), standbys: ceph3.ctsxnh\n    osd: 3 osds: 3 up (since 21s), 3 in (since 46s)\n \n  data:\n    pools:   1 pools, 1 pgs\n    objects: 0 objects, 0 B\n    usage:   15 MiB used, 11 TiB \/ 11 TiB avail\n    pgs:     1 active+clean<\/pre>\n\n\n\n<p>Now, if we want to create a Ceph filesystem (CephFS), we must create two pools: one for data and one for metadata. 
So, execute the commands below on one Ceph node:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">ceph osd pool create cephfs_data\nceph osd pool create cephfs_metadata\nceph fs new cephfs cephfs_metadata cephfs_data<\/pre>\n\n\n\n<p>Now, when we log into the Ceph dashboard, we can see that the health is RED and there is an error. We must create the MDS services (a command-line sketch for this step is shown further below):<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"424\" src=\"https:\/\/www.gonscak.sk\/wp-content\/uploads\/2022\/04\/image-3-1024x424.png\" alt=\"\" class=\"wp-image-894\" srcset=\"https:\/\/www.gonscak.sk\/wp-content\/uploads\/2022\/04\/image-3-1024x424.png 1024w, https:\/\/www.gonscak.sk\/wp-content\/uploads\/2022\/04\/image-3-300x124.png 300w, https:\/\/www.gonscak.sk\/wp-content\/uploads\/2022\/04\/image-3-768x318.png 768w, https:\/\/www.gonscak.sk\/wp-content\/uploads\/2022\/04\/image-3.png 1141w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<p>This deploys MDS services to our nodes (one becomes active and one becomes standby). <\/p>\n\n\n\n<p>Now, we can continue on the command line and create a user which can mount and write to this CephFS:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">ceph auth add client.cephfs mon 'allow r' osd 'allow rwx pool=cephfs_data'\nceph auth caps client.cephfs mds 'allow r,allow rw path=\/' mon 'allow r' osd 'allow rw pool=cephfs_data' osd 'allow rw pool=cephfs_metadata' \n\n# and see our caps:\nceph auth get client.cephfs\n\n[client.cephfs]\n\tkey = agvererbrtbrttnrsasda\/a5\/dd==\n\tcaps mds = \"allow r,allow rw path=\/\"\n\tcaps mon = \"allow r\"\n\tcaps osd = \"allow rw pool=cephfs_data\"<\/pre>\n\n\n\n<p>Now, we can export our key and save it to a file. Then we can mount this CephFS on another Linux machine:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"># on a Ceph node, save the client key to a file and copy it to the client machine:\nceph auth get-key client.cephfs &gt; \/root\/cephfs.key\n\n# on the client:\nmount -t ceph ceph1.example.com:\/ \/mnt\/cephfs -o name=cephfs,secretfile=\/root\/cephfs.key -v\n\ndf -h\nFilesystem                      Size  Used Avail Use% Mounted on\n192.168.1.1:\/                  3.5T     0  3.5T   0% \/mnt\/cephfs<\/pre>\n\n\n\n<p>If we want to check whether compression is enabled, we execute:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">ceph osd pool get cephfs_data compression_algorithm\n     Error ENOENT: option 'compression_algorithm' is not set on pool 'cephfs_data'\n\nceph osd pool get cephfs_data compression_mode\n     Error ENOENT: option 'compression_mode' is not set on pool 'cephfs_data'<\/pre>\n\n\n\n<p>If we want compression, we must enable it and set an algorithm.<\/p>
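\n\n\n\n<p>A quick side note on the MDS step above: instead of the dashboard, the MDS service can also be deployed from the command line with the orchestrator. The snippet below is only a sketch \u2013 the placement (daemon count and host names) is an example for our three nodes, adjust it to your environment:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"># deploy MDS daemons for the filesystem \"cephfs\" on our hosts\nceph orch apply mds cephfs --placement=\"3 ceph1 ceph2 ceph3\"\n\n# check which MDS is active and which is standby\nceph fs status cephfs<\/pre>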
\n\n\n\n<p>The available compression modes and algorithms are described in the pool documentation: https:\/\/docs.ceph.com\/en\/latest\/rados\/operations\/pools\/<\/p>\n\n\n\n<p>We can see that there is data, but no compression yet:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">ceph df detail<\/pre>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"82\" src=\"https:\/\/www.gonscak.sk\/wp-content\/uploads\/2022\/04\/ceph1-1024x82.png\" alt=\"\" class=\"wp-image-897\" srcset=\"https:\/\/www.gonscak.sk\/wp-content\/uploads\/2022\/04\/ceph1-1024x82.png 1024w, https:\/\/www.gonscak.sk\/wp-content\/uploads\/2022\/04\/ceph1-300x24.png 300w, https:\/\/www.gonscak.sk\/wp-content\/uploads\/2022\/04\/ceph1-768x62.png 768w, https:\/\/www.gonscak.sk\/wp-content\/uploads\/2022\/04\/ceph1.png 1534w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<p>So we enable it on both CephFS pools:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">ceph osd pool set cephfs_data compression_mode aggressive\nceph osd pool set cephfs_data compression_algorithm lz4\nceph osd pool set cephfs_metadata compression_mode aggressive\nceph osd pool set cephfs_metadata compression_algorithm lz4\n\n# and see:\n\nceph osd pool get cephfs_data compression_algorithm\n      compression_algorithm: lz4\nceph osd pool get cephfs_data compression_mode\n      compression_mode: aggressive<\/pre>\n\n\n\n<p>And after copying some data, we can see:<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"25\" src=\"https:\/\/www.gonscak.sk\/wp-content\/uploads\/2022\/04\/image-4-1024x25.png\" alt=\"\" class=\"wp-image-899\" srcset=\"https:\/\/www.gonscak.sk\/wp-content\/uploads\/2022\/04\/image-4-1024x25.png 1024w, https:\/\/www.gonscak.sk\/wp-content\/uploads\/2022\/04\/image-4-300x7.png 300w, https:\/\/www.gonscak.sk\/wp-content\/uploads\/2022\/04\/image-4-768x19.png 768w, https:\/\/www.gonscak.sk\/wp-content\/uploads\/2022\/04\/image-4.png 1529w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<p>Have a nice day!<\/p>\n ","protected":false},"excerpt":{"rendered":"<p>Today, we will create Ceph network storage on our CentOS 8 Stream systems with the cephadm command. 
We will follow the official installation manual: https:\/\/docs.ceph.com\/en\/pacific\/install\/ In this example, we will have three systems (nodes) with identical HW resources (4 GB RAM, 4 vCPUs, two NICs \u2013 one internal for Ceph and one for the outside world \u2013 and a dedicated 4 TB &hellip; <a href=\"https:\/\/www.gonscak.sk\/?p=875\" class=\"more-link\">Continue reading <span class=\"screen-reader-text\">How to create ceph Pacific on Centos 8 Stream via Cephadm<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[116,139,144,145,146,115],"class_list":["post-875","post","type-post","status-publish","format-standard","hentry","category-uncategorized","tag-centos-8","tag-ceph","tag-cephadm","tag-dashboard","tag-pacific","tag-selinux"],"_links":{"self":[{"href":"https:\/\/www.gonscak.sk\/index.php?rest_route=\/wp\/v2\/posts\/875","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.gonscak.sk\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.gonscak.sk\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.gonscak.sk\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.gonscak.sk\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=875"}],"version-history":[{"count":17,"href":"https:\/\/www.gonscak.sk\/index.php?rest_route=\/wp\/v2\/posts\/875\/revisions"}],"predecessor-version":[{"id":901,"href":"https:\/\/www.gonscak.sk\/index.php?rest_route=\/wp\/v2\/posts\/875\/revisions\/901"}],"wp:attachment":[{"href":"https:\/\/www.gonscak.sk\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=875"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.gonscak.sk\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=875"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.gonscak.sk\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=875"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}