How to create Ceph Pacific on CentOS 8 Stream via cephadm

Today, we will create Ceph network storage on CentOS 8 Stream using the cephadm command. We will follow the official manual: https://docs.ceph.com/en/pacific/install/

In this example, we have three systems (nodes) with identical HW resources (4 GB RAM, 4 vCPUs, two NICs – one internal for Ceph and one for the outside world – and a dedicated 4 TB SSD for Ceph storage). Unless stated otherwise, every command in this article must be run on all nodes. The public network is 192.168.1.0/24 and the separate Ceph cluster network is 192.168.2.0/24.

So, as a first step, we need to set up time synchronization on our CentOS systems.

Setting up time

To keep time synchronized, I use chrony:

dnf install chrony -y
systemctl enable chronyd
timedatectl set-timezone Europe/Bratislava
timedatectl

Now, edit some variables in the chronyd configuration file. Add some servers from the pool and, if needed, define the local subnets to which we deliver time:

vim /etc/chrony.conf

pool 2.centos.pool.ntp.org iburst
pool 1.centos.pool.ntp.org iburst
pool 3.centos.pool.ntp.org iburst
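
If the nodes should also serve time to the internal Ceph subnet, an allow directive can be added as well. This is optional, and the subnet below is just the cluster network from this example (adjust it to your setup):

allow 192.168.2.0/24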

Now start/restart the service and check that it is working:

systemctl restart chronyd
systemctl status chronyd.service
chronyc sources

Set hostnames, create SSH RSA keys and update

Now, we must set the hostname permanently on every node (ceph1 shown here; use ceph2 and ceph3 on the other nodes):

hostnamectl set-hostname ceph1

Now, add all hostnames and IPs to /etc/hosts:

tee -a /etc/hosts<<EOF
192.168.1.1    ceph1
192.168.1.2    ceph2
192.168.1.3    ceph3
EOF

Now, create an RSA key pair for password-less SSH to and from each node as root, which we will use for installation and updates:

ssh-keygen -t rsa -b 4096 -C "ceph1"

where:
-b bits. Number of bits in the key to create
-t type. Specify type of key to create
-C comment

And copy it to other nodes:

for host in ceph1 ceph2 ceph3; do
 ssh-copy-id root@$host
done

Now update:

dnf update -y
dnf install podman gdisk jq -y
-reboot-

Preparing for Ceph

Now, set up a yum/dnf repository for Ceph packages and updates, and install the cephadm package:

dnf install -y centos-release-ceph-pacific.noarch
dnf install -y cephadm

Now we can bootstrap a new cluster. The first step in creating a new Ceph cluster is running the cephadm bootstrap command on the cluster's first host. This creates the cluster's first monitor daemon, and that monitor daemon needs an IP address, so you must pass the IP address of the first host to the bootstrap command.

Now, we can bootstrap our first monitor with the following command (on one node only!):

cephadm bootstrap --mon-ip 192.168.1.1 --cluster-network 192.168.2.0/24

After some time, we have a working cluster and can connect to the dashboard:

Generating a dashboard self-signed certificate...
Creating initial admin user...
Fetching dashboard port number...
firewalld ready
Enabling firewalld port 8443/tcp in current zone...
Ceph Dashboard is now available at:

	     URL: https://ceph1.example.com:8443/
	    User: admin
	Password: tralala

Enabling client.admin keyring and conf on hosts with "admin" label
You can access the Ceph CLI with:

	sudo /usr/sbin/cephadm shell --fsid xxx -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring

Please consider enabling telemetry to help improve Ceph:
	ceph telemetry on
For more information see:
	https://docs.ceph.com/docs/pacific/mgr/telemetry/

Now we can log in to the web interface (dashboard). On first login we use the username and password shown above, and we are forced to change the password.

But our cluster is not finished yet 🙂

So, we continue. On the first node (ceph1), where we bootstrapped Ceph, we can view the cluster status with:

cephadm shell -- ceph -s

cluster:
    id:     77e12ffa-c017-11ec-9124-c67be67db31c
    health: HEALTH_WARN
            OSD count 0 < osd_pool_default_size 3
 
  services:
    mon: 1 daemons, quorum ceph1 (age 26m)
    mgr: ceph1.rgzjga(active, since 23m)
    osd: 0 osds: 0 up, 0 in
 
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:     

But checking the status of Ceph this way is cumbersome, so we install the ceph-common package via cephadm on every node:

cephadm add-repo --release pacific
cephadm install ceph-common
ceph status

  cluster:
    id:     77e12ffa-c017-11ec-9124-c67be67db31c
    health: HEALTH_WARN
            OSD count 0 < osd_pool_default_size 3
 
  services:
    mon: 1 daemons, quorum ceph1 (age 27m)
    mgr: ceph1.rgzjga(active, since 24m)
    osd: 0 osds: 0 up, 0 in
 
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:     

Now, we copy Ceph's SSH public key to the other hosts so that cephadm can manage them without passwords:

ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph2
ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph3

And now we can add these nodes to Ceph, running the commands from the first node (where we bootstrapped Ceph). After these commands, wait some time for podman to deploy the containers (monitor, manager). Then label the nodes as admin.

ceph orch host add ceph2 192.168.1.2
ceph orch host add ceph3 192.168.1.3
ceph orch host label add ceph2 _admin
ceph orch host label add ceph3 _admin
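
To check that the hosts were added and labeled, we can list them (the exact output will differ in your cluster):

ceph orch host ls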

Now, let's see which disks are available to us:

ceph orch device ls
HOST          PATH      TYPE  DEVICE ID                   SIZE  AVAILABLE  REJECT REASONS  
ceph1  /dev/sda  ssd   QEMU_HARDDISK_drive-scsi2  4000G  Yes                        
ceph2  /dev/sda  ssd   QEMU_HARDDISK_drive-scsi2  4000G  Yes                        
ceph3  /dev/sda  ssd   QEMU_HARDDISK_drive-scsi2  4000G  Yes      

So now we can create OSDs from these devices:

ceph orch daemon add osd ceph1:/dev/sda
    Created osd(s) 0 on host 'ceph1'

ceph orch daemon add osd ceph2:/dev/sda
    Created osd(s) 1 on host 'ceph2'

ceph orch daemon add osd ceph3:/dev/sda
    Created osd(s) 2 on host 'ceph3'
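
Instead of adding each device by hand, the orchestrator can also consume every eligible device automatically. This is an alternative sketch: it will turn any clean, unused disk on any host into an OSD, so use it only when that is really what you want.

ceph orch apply osd --all-available-devices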

ceph -s
  cluster:
    id:     77e12ffa-c017-11ec-9124-c67be67db31c
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum ceph1,ceph3,ceph2 (age 8m)
    mgr: ceph1.vsshgj(active, since 8m), standbys: ceph3.ctsxnh
    osd: 3 osds: 3 up (since 21s), 3 in (since 46s)
 
  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   15 MiB used, 11 TiB / 11 TiB avail
    pgs:     1 active+clean

Now, if we want to create a Ceph filesystem (CephFS), we must create two pools: one for data and one for metadata. So, execute the commands below on one Ceph node:

ceph osd pool create cephfs_data
ceph osd pool create cephfs_metadata
ceph fs new cephfs cephfs_metadata cephfs_data

Now, log into the Ceph dashboard and we can see that the health is RED and there is an error. We must create the MDS services:
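
A minimal sketch of the MDS deployment using the orchestrator, assuming we want MDS daemons on all three nodes:

ceph orch apply mds cephfs --placement="3 ceph1 ceph2 ceph3"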

This deploys MDS services to our nodes (one becomes active and the rest become standby).

Now, we can continue on the command line and create a user which can mount and write to this CephFS:

ceph auth add client.cephfs mon 'allow r' osd 'allow rwx pool=cephfs_data'
ceph auth caps client.cephfs mds 'allow r,allow rw path=/' mon 'allow r' osd 'allow rw pool=cephfs_data' osd 'allow rw pool=cephfs_metadata' 

#and see our caps:
ceph auth get client.cephfs

[client.cephfs]
	key = agvererbrtbrttnrsasda/a5/dd==
	caps mds = "allow r,allow rw path=/"
	caps mon = "allow r"
	caps osd = "allow rw pool=cephfs_data"

Now, we export the key and save it to a file on the client.
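
A minimal sketch of exporting the key, assuming /root/cephfs.key as the key location and a hypothetical client host named client1:

ceph auth get-key client.cephfs > /root/cephfs.key
scp /root/cephfs.key root@client1:/root/cephfs.key

With the key file in place on the client, we can mount the CephFS: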

mount -t ceph ceph1.example.com:/ /mnt/cephfs -o name=cephfs,secretfile=/root/cephfs.key -v

df -h
Filesystem                      Size  Used Avail Use% Mounted on
192.168.1.1:/                  3.5T     0  3.5T   0% /mnt/cephfs
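
For a persistent mount across reboots, an /etc/fstab entry on the client could look like this (a sketch assuming the same monitor host, mount point and key file as above):

ceph1.example.com:/   /mnt/cephfs   ceph   name=cephfs,secretfile=/root/cephfs.key,noatime,_netdev   0 0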

If we want to check whether compression is enabled, execute:

ceph osd pool get cephfs_data compression_algorithm
     Error ENOENT: option 'compression_algorithm' is not set on pool 'cephfs_data'

ceph osd pool get cephfs_data compression_mode
     Error ENOENT: option 'compression_mode' is not set on pool 'cephfs_data'

If we want compression, we enable it and set the algorithm. You can read about the compression modes at https://docs.ceph.com/en/latest/rados/operations/pools/

We can see that there are data, but no compression:

ceph df detail

So enable it on both cephfs pools:

ceph osd pool set cephfs_data compression_mode aggressive
ceph osd pool set cephfs_data compression_algorithm lz4
ceph osd pool set cephfs_metadata compression_mode aggressive
ceph osd pool set cephfs_metadata compression_algorithm lz4

# and see:

ceph osd pool get cephfs_data compression_algorithm
      compression_algorithm: lz4
ceph osd pool get cephfs_data compression_mode
      compression_mode: aggressive

And after copying some data, we can check the result again with ceph df detail.

Have a nice day


How to create Ceph on CentOS 8 Stream via Ceph Ansible

I assume that we have a working CentOS 8 Stream system. In this example, we again have three systems (nodes) with identical HW resources (4 GB RAM, 4 vCPUs, two NICs – one internal for Ceph and one for the outside world – and a 10 TB spinning HDD). Unless stated otherwise, every command in this article must be run on all nodes. The public network is 192.168.1.0/24 and the separate Ceph cluster network is 192.168.2.0/24.

Setting up time

To keep time synchronized, I use chrony:

dnf install chrony -y
systemctl enable chronyd
timedatectl set-timezone Europe/Bratislava
timedatectl

Now, edit some variables in the chronyd configuration file. Add some servers from the pool and, if needed, define the local subnets to which we deliver time:

vim /etc/chrony.conf

pool 2.centos.pool.ntp.org iburst
pool 1.centos.pool.ntp.org iburst
pool 3.centos.pool.ntp.org iburst

Now start/restart the service and check that it is working:

systemctl restart chronyd
systemctl status chronyd.service
chronyc sources

Set hostnames, create SSH RSA keys, update and install some packages

Now, we must set the hostname permanently on every node (ceph1 shown here; use ceph2 and ceph3 on the other nodes):

hostnamectl set-hostname ceph1

Now, add all hostnames and IPs to /etc/hosts:

tee -a /etc/hosts<<EOF
192.168.1.1    ceph1
192.168.1.2    ceph2
192.168.1.3    ceph3
192.168.2.1    ceph1-cluster
192.168.2.2    ceph2-cluster
192.168.2.3    ceph3-cluster

EOF

Now, create an RSA key pair for password-less SSH to and from each node:

ssh-keygen -t rsa -b 4096 -C "ceph1"

where:
-b bits. Number of bits in the key to create
-t type. Specify type of key to create
-C comment

And copy it to other nodes:

for host in ceph1 ceph2 ceph3; do
 ssh-copy-id root@$host
done

Now update and install packages:

dnf update -y
-reboot-
dnf install git vim bash-completion python3-pip

Preparing for Ceph

Now, install the EPEL repository and enable PowerTools:

dnf -y install dnf-plugins-core
dnf -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm
dnf config-manager --set-enabled powertools

dnf repolist
repo id            repo name
appstream          CentOS Stream 8 - AppStream
epel               Extra Packages for Enterprise Linux 8 - x86_64
epel-modular       Extra Packages for Enterprise Linux Modular 8 - x86_64
epel-next          Extra Packages for Enterprise Linux 8 - Next - x86_64
extras             CentOS Stream 8 - Extras
powertools         CentOS Stream 8 - PowerTools

Clone Ceph Ansible repository:

cd /root/
git clone https://github.com/ceph/ceph-ansible.git

Choose the ceph-ansible branch you wish to use. The command syntax is: git checkout $branch

I’ll switch to stable-5.0, which supports the Ceph Octopus release.

cd ceph-ansible
git checkout stable-5.0

pip3 install setuptools-rust
pip3 install wheel
export CRYPTOGRAPHY_DONT_BUILD_RUST=1
pip3 install --upgrade pip

pip3 install -r requirements.txt
echo "PATH=\$PATH:/usr/local/bin" >>~/.bashrc
source ~/.bashrc

Confirm the installed Ansible version:

ansible --version
ansible 2.9.26
  config file = /root/ceph-ansible/ansible.cfg
  configured module search path = ['/root/ceph-ansible/library']
  ansible python module location = /usr/local/lib/python3.6/site-packages/ansible
  executable location = /usr/local/bin/ansible
  python version = 3.6.8 (default, Sep 10 2021, 09:13:53) [GCC 8.5.0 20210514 (Red Hat 8.5.0-3)]

Now, we find which disks (the spinning HDDs) are ready to become OSDs. On each of my nodes there is a free disk /dev/sda. Check via lsblk:

lsblk
NAME                    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                       8:0    0 10.7T  0 disk 
sr0                      11:0    1 1024M  0 rom  
vda                     252:0    0   32G  0 disk 
├─vda1                  252:1    0  512M  0 part /boot
└─vda2                  252:2    0 31.5G  0 part 
  ├─centos--vg0-root    253:0    0    3G  0 lvm  /
  ├─centos--vg0-swap    253:1    0    1G  0 lvm  [SWAP]
  ├─centos--vg0-tmp     253:2    0  512M  0 lvm  /tmp
  ├─centos--vg0-var_log 253:3    0  512M  0 lvm  /var/log
  ├─centos--vg0-var     253:4    0    3G  0 lvm  /var
  └─centos--vg0-home    253:5    0    2G  0 lvm  /home

Now we are ready to install Ceph.

Deploy Ceph Octopus (15) Cluster on CentOS 8 stream

Now we get to play a little 🙂 I use the first node (ceph1) as the admin node for the installation. Configure the Ansible inventory and playbook files, and create the Ceph cluster group variables file on the admin node:

cd /root/ceph-ansible
cp group_vars/all.yml.sample  group_vars/all.yml
vim group_vars/all.yml

And edit the variables of your Ceph cluster as you see fit:

#General
cluster: ceph

# Inventory host group variables
mon_group_name: mons
osd_group_name: osds
rgw_group_name: rgws
mds_group_name: mdss
nfs_group_name: nfss
rbdmirror_group_name: rbdmirrors
client_group_name: clients
iscsi_gw_group_name: iscsigws
mgr_group_name: mgrs
rgwloadbalancer_group_name: rgwloadbalancers
grafana_server_group_name: grafana-server

# Firewalld / NTP
configure_firewall: True
ntp_service_enabled: true
ntp_daemon_type: chronyd

# Ceph packages
ceph_origin: repository
ceph_repository: community
ceph_repository_type: cdn
ceph_stable_release: octopus

# Interface options
monitor_interface: ens18
radosgw_interface: ens18
public_network: 192.168.1.0/24
cluster_network: 192.168.2.0/24


# DASHBOARD
dashboard_enabled: True
dashboard_protocol: http
dashboard_admin_user: admin
dashboard_admin_password: strongpass

grafana_admin_user: admin
grafana_admin_password: strongpass

Now, create a new Ansible inventory of the Ceph nodes. Set your inventory file properly; below is my inventory. Modify the inventory groups according to which services you want installed on which cluster nodes. The OSD devices themselves are set in the group variables, as shown in the sketch after the inventory.

vim hosts

# Ceph admin user for SSH and Sudo
[all:vars]
ansible_ssh_user=root
ansible_become=true
ansible_become_method=sudo
ansible_become_user=root

# Ceph Monitor Nodes
[mons]
ceph1
ceph2
ceph3

# MDS Nodes
[mdss]
ceph1
ceph2
ceph3

# RGW
[rgws]
ceph1
ceph2
ceph3

# Manager Daemon Nodes
[mgrs]
ceph1
ceph2
ceph3

# set OSD (Object Storage Daemon) Node
[osds]
ceph1
ceph2
ceph3

# Grafana server
[grafana-server]
ceph1
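
ceph-ansible also needs to know which devices to turn into OSDs. This is defined in the OSD group variables rather than in the inventory; a minimal group_vars/osds.yml sketch, assuming /dev/sda is the free disk on every node as shown in the lsblk output above:

cp group_vars/osds.yml.sample group_vars/osds.yml
vim group_vars/osds.yml

devices:
  - /dev/sda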

Create the playbook file by copying the sample playbook at the root of the ceph-ansible project, called site.yml.sample:

cp site.yml.sample site.yml 

Run the playbook:

ansible-playbook -i hosts site.yml 

If the installation was successful, a health check should return HEALTH_OK or only minimal warnings:

# ceph -s
  cluster:
    id:     dcfd26f5-49e9-4256-86c2-a5a0deac7b54
    health: HEALTH_WARN
            mons are allowing insecure global_id reclaim
 
  services:
    mon: 3 daemons, quorum eu-ceph1,eu-ceph2,eu-ceph3 (age 67m)
    mgr: ceph2(active, since 55m), standbys: ceph3, ceph1
    mds: cephfs:1 {0=ceph1=up:active} 2 up:standby
    osd: 3 osds: 3 up (since 60m), 3 in (since 60m)
    rgw: 3 daemons active (ceph1.rgw0, ceph2.rgw0, ceph3.rgw0)
 
  task status:
 
  data:
    pools:   7 pools, 169 pgs
    objects: 215 objects, 11 KiB
    usage:   3.1 GiB used, 32 TiB / 32 TiB avail
    pgs:     169 active+clean

As you can see in the output above, I have a warning: mons are allowing insecure global_id reclaim

So, silence it as shown below, or fix it properly…

ceph config set mon mon_warn_on_insecure_global_id_reclaim_allowed false
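
Silencing only hides the warning. Once all clients and daemons run a fixed version, the insecure reclaim itself can be disabled. This is the real fix, but verify your clients first, otherwise older clients will be locked out:

ceph config set mon auth_allow_insecure_global_id_reclaim false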