Configuring a remote logging solution – rsyslog

https://unixcop.com/how-to-install-syslog-server-and-client-centos8/

With so much complex information produced by multiple applications and systems, administrators need a way to review the details, so they can understand the cause of problems or plan appropriately for the future.

The Syslog (System Logging Protocol) service on a server can act as a central log monitoring point for the network: all servers, network devices, switches, routers and internal services that generate logs can send their messages there, whether they relate to a specific internal issue or are purely informational.

To ensure that logs from various machines in your environment are recorded centrally on a logging server, you can configure the Rsyslog application to record logs that fit specific criteria from the client system to the server.

The Rsyslog application, in combination with the systemd-journald service, provides local and remote logging support.

In order to set up a centralized log server on a CentOS/RHEL 8 server, check and confirm that the /var partition has enough space (a few GB minimum) to store all the log files sent by the other devices on the network. I recommend using a separate drive (LVM or RAID) mounted on the /var/log/ directory.

The rsyslog service is installed and runs automatically on a CentOS/RHEL 8 server. To verify that the daemon is running, run the following command:

systemctl status rsyslog.service

If the service is not running, start the rsyslog daemon with:

systemctl start rsyslog.service

Now, in the /etc/rsyslog.conf configuration file, find and uncomment the following lines to enable UDP reception on port 514. Syslog traditionally uses the UDP protocol on port 514 for log transmission.

vim /etc/rsyslog.conf

module(load="imudp") # needs to be done just once
input(type="imudp" port="514")

The UDP protocol avoids TCP's overhead, which makes transmission faster, but it does not guarantee that the transmitted data actually arrives.

However, if you want to receive logs over TCP instead, find and uncomment the following lines in the /etc/rsyslog.conf configuration file so that the rsyslog daemon binds and listens on a TCP socket on port 514.

module(load="imtcp") # needs to be done just once
input(type="imtcp" port="514")

Now create a new template for the received remote messages. This template tells the local rsyslog server where to save the messages sent by the Syslog network clients:

$template RemoteLogs,"/var/log/%PROGRAMNAME%.log"
*.* ?RemoteLogs
& ~

The $template RemoteLogs directive tells the rsyslog daemon to write each received log message to its own file, chosen from the property used in the template path: here the name of the remote client application that produced the message (%PROGRAMNAME%).

All received log messages are therefore written to the local filesystem into files named after the sending application, kept in the /var/log/ directory.

The & ~ rule tells the local rsyslog server to stop processing a received message once the previous rule has written it, so the message is discarded instead of also being written to the default local log files.

RemoteLogs is an arbitrary name for this template; you can use whatever name best suits your setup.

To configure more complex Rsyslog templates, read the Rsyslog configuration file manual by running the man rsyslog.conf command or consult Rsyslog online documentation.
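
For example, here is a sketch of a template that also separates the received logs per client host; the %HOSTNAME% property is standard in rsyslog, while the path and template name are just an illustration:

$template RemoteLogsByHost,"/var/log/remote/%HOSTNAME%/%PROGRAMNAME%.log"
*.* ?RemoteLogsByHost
& ~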

Now we save and exit the configuration file and restart the daemon:

systemctl restart rsyslog.service

systemctl status rsyslog.service
....
Feb 25 14:43:06 syslog-new rsyslogd[1668]: imjournal: journal files changed, reloading... 

Once you have restarted it, the rsyslog server should act as a centralized log server and record messages from Syslog clients. To confirm that rsyslog is listening on the configured network sockets, run netstat:

netstat -tulpn | grep rsyslog 

If the command is not available, run this to find out which package provides it:

dnf provides netstat

Last metadata expiration check: 2:06:49 ago on Fri 25 Feb 2022 12:38:49 PM CET.
net-tools-2.0-0.52.20160912git.el8.x86_64 : Basic networking tools
Repo        : baseos
Matched from:
Filename    : /usr/bin/netstat

So install the package and run netstat again:

dnf install net-tools -y
netstat -tulpn | grep rsyslog 
tcp        0      0 0.0.0.0:514             0.0.0.0:*               LISTEN      1668/rsyslogd       
tcp6       0      0 :::514                  :::*                    LISTEN      1668/rsyslogd       
udp        0      0 0.0.0.0:514             0.0.0.0:*                           1668/rsyslogd       
udp6       0      0 :::514                  :::*                                1668/rsyslogd      
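
Alternatively, the ss utility from the iproute package (usually present by default on CentOS/RHEL 8) shows the same sockets without installing net-tools:

ss -tulpn | grep rsyslog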

To add firewall exceptions for this port, execute the following:

firewall-cmd --permanent --add-service=syslog
firewall-cmd --permanent --add-port=514/tcp # only if you enabled the TCP socket
firewall-cmd --reload
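
You can confirm that the rules are active with:

firewall-cmd --list-all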

And now we can configure remote logging on other servers or hardware (switches)…
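
Before configuring the real devices, you can quickly check end-to-end delivery from any Linux client with the logger utility from util-linux (adjust the IP; use -T instead of -d if you configured the TCP input):

logger -n 10.10.10.10 -P 514 -d "test message from client"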

On MikroTik devices, set the remote IP for logging like this:

/system/logging/action
set remote remote=10.10.10.10
/system/logging/
add action=remote topics=info
add action=remote topics=critical
add action=remote topics=error
add action=remote topics=warning

On Cisco SG350X switches:

configure
logging host 10.10.10.10 description syslog.example.sk 
logging origin-id hostname

Or we can configure a web server to send its Apache logs to our syslog server. First, edit the Apache httpd.conf and the vhost configuration:

vim /etc/httpd/conf/httpd.conf

ErrorLog "||/usr/bin/logger -t apache -i -p local4.info"
    LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
    LogFormat "%h %l %u %t \"%r\" %>s %b" common

vim /etc/httpd/conf.d/0-vhost.conf
...
    LogFormat "%v %h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-agent}i\"" vhostform
    CustomLog "||/usr/bin/logger -t apache -i -p local4.info" vhostform

And now configure rsyslog on the web server to forward these logs to our syslog server (a single @ would forward over UDP, the double @@ forwards over TCP):

vim /etc/rsyslog.conf

#### OWN RULES ####
local4.info                                             @@10.10.10.10

Now we can see that our logs arrive on the syslog server. There, we can set up filters to separate the logs into their own directories and files based on hostname, IP and so on. I will explain this later (when we switch from rsyslog to the syslog-ng package).
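
As a teaser, here is a minimal sketch of such a server-side filter in rsyslog's RainerScript syntax; the hostname webserver1 and the target path are hypothetical:

if $hostname == 'webserver1' and $syslogfacility-text == 'local4' then {
    action(type="omfile" file="/var/log/remote/webserver1-apache.log")
    stop
}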


How to install nextcloud v18 on Centos 8 Stream

I create a basic installation of CentOS 8 Stream from the ISO: CentOS-Stream-8-x86_64-20191219-boot.iso

During installation I choose the minimal install with standard utilities. Enable network time and set up LVM on the virtio disk. I set a password for root and create a new user with root privileges.

After installation, I create an encrypted LVM partition to store the Nextcloud data on it; I will not use Nextcloud's own data encryption. The command below creates the encrypted disk. We must enter a passphrase twice:

 cryptsetup -y -v luksFormat /dev/vdb

Now, we open this partition and look at status:

cryptsetup luksOpen /dev/vdb vdb_crypt
cryptsetup -v status vdb_crypt

/dev/mapper/vdb_crypt is active.
   type:    LUKS2
   cipher:  aes-xts-plain64
   keysize: 512 bits
   key location: keyring
   device:  /dev/vdb
   sector size:  512
   offset:  32768 sectors
   size:    209682432 sectors
   mode:    read/write
 Command successful.

Now I write 4 GB of zeros to the device to check that everything is OK. It is possible to fill up the whole device this way, but that can take a long time. The real reason for doing it is that writing zeros through the encrypted mapping fills the underlying blocks with ciphertext, so the outside world sees only random data, which protects against disclosure of usage patterns.

dd if=/dev/zero of=/dev/mapper/vdb_crypt bs=4M count=1000
4194304000 bytes (4.2 GB, 3.9 GiB) copied, 130.273 s, 32.2 MB/s

Now try closing and reopening the encrypted device. Then I create LVM on top of the LUKS-encrypted disk, create a filesystem and test-mount it:

cryptsetup luksClose vdb_crypt
cryptsetup luksOpen /dev/vdb vdb_crypt
cryptsetup -v status vdb_crypt
pvcreate /dev/mapper/vdb_crypt
vgcreate nextcloud /dev/mapper/vdb_crypt
lvcreate -n data -L+30G nextcloud
mkdir /mnt/test
mkfs.xfs /dev/mapper/nextcloud-data
mount /dev/mapper/nextcloud-data /mnt/test/
touch /mnt/test/hello 
ll /mnt/test/hello
umount /mnt/test/

Installing nextcloud and prerequisites

And now we can start preparing our CentOS for Nextcloud.

First, update the system via dnf (DNF is the successor to YUM as the package manager for RPM-based Linux distributions; it roughly maintains CLI compatibility with YUM and defines a strict API for extensions and plugins):

dnf update -y

Next, we install MariaDB and create an empty database for Nextcloud, then start the service and enable it to start automatically after boot.
If you wish, you can skip the MariaDB installation and use the built-in SQLite instead; in that case continue with installing the Apache web server.

dnf -y install mariadb-server
...
systemctl start mariadb
systemctl enable mariadb

Now we run the post-installation script to finish setting up the MariaDB server:

mysql_secure_installation
Set root password? [Y/n] y
Remove anonymous users? [Y/n] y
Disallow root login remotely? [Y/n] y
Remove test database and access to it? [Y/n] y
Reload privilege tables now? [Y/n] y

Now, we can create a database for nextcloud.

mysql -u root -p
...
CREATE DATABASE nextcloud;
GRANT ALL PRIVILEGES ON nextcloud.* TO 'nextclouduser'@'localhost' IDENTIFIED BY 'YOURPASSWORD';
FLUSH PRIVILEGES;
exit;

Now we install the Apache web server, start it and enable it to start automatically after boot:

dnf install httpd -y
systemctl start httpd.service
systemctl enable httpd.service

And set up the firewall for http/80 and ssh/22 only:

systemctl status httpd
firewall-cmd --list-all
firewall-cmd --zone=public --permanent --remove-service=dhcpv6-client
firewall-cmd --zone=public --permanent --add-service=http
firewall-cmd --reload

Now point your browser at this server and check that you see the Apache test page.

Now we can install PHP. Nextcloud (version 18.0.1 at the time of writing) supports PHP 7.1, 7.2 or 7.3, so I use the Remi repositories and install PHP 7.3:

dnf -y install dnf-utils http://rpms.remirepo.net/enterprise/remi-release-8.rpm
dnf module list php
dnf module reset php
dnf module enable php:remi-7.3
dnf info php
dnf install php php-gd php-mbstring php-intl php-pecl-apcu php-mysqlnd php-pecl-imagick.x86_64 php-ldap php-pecl-zip.x86_64 php-process.x86_64
php -v
php --ini |grep Loaded
sed -i "s/post_max_size = 8M/post_max_size = 500M/" /etc/php.ini
sed -i "s/upload_max_filesize = 2M/upload_max_filesize = 500M/" /etc/php.ini
sed -i "s/memory_limit = 128M/memory_limit = 512M/" /etc/php.ini
systemctl start php-fpm.service
systemctl enable php-fpm.service

And now, we can install nextcloud:

mkdir -p /var/www/html/nextcloud/data
cd /var/www/html/nextcloud/
mount /dev/mapper/nextcloud-data /var/www/html/nextcloud/data/
wget https://download.nextcloud.com/server/releases/nextcloud-18.0.1.zip
unzip nextcloud-18.0.1.zip
rm nextcloud-18.0.1.zip
mv nextcloud/* .
mv nextcloud/.htaccess .
mv nextcloud/.user.ini .
rmdir nextcloud/
mkdir /var/www/html/nextcloud/data
chown -R apache:apache /var/www/html/nextcloud/
find /var/www/html/nextcloud/ -type d -exec chmod 750 {} \; 
find /var/www/html/nextcloud/ -type f -exec chmod 640 {} \;

Now create a configuration file for Nextcloud in httpd:

vim /etc/httpd/conf.d/nextcloud.conf
<VirtualHost *:80>
  DocumentRoot /var/www/html/nextcloud/
  ServerName  your.server.com

  <Directory /var/www/html/nextcloud/>
    Require all granted
    AllowOverride All
    Options FollowSymLinks MultiViews

    <IfModule mod_dav.c>
      Dav off
    </IfModule>

  </Directory>
</VirtualHost>
apachectl graceful

As the Nextcloud admin manual notes, you can run into SELinux permission problems. Run these commands as root to adjust the contexts:

semanage fcontext -a -t httpd_sys_rw_content_t '/var/www/html/nextcloud/data(/.*)?'
semanage fcontext -a -t httpd_sys_rw_content_t '/var/www/html/nextcloud/config(/.*)?'
semanage fcontext -a -t httpd_sys_rw_content_t '/var/www/html/nextcloud/apps(/.*)?'
semanage fcontext -a -t httpd_sys_rw_content_t '/var/www/html/nextcloud/.htaccess'
semanage fcontext -a -t httpd_sys_rw_content_t '/var/www/html/nextcloud/.user.ini'
restorecon -Rv '/var/www/html/nextcloud/'

If you see the error “-bash: semanage: command not found”, find and install the package that provides it:

dnf provides /usr/sbin/semanage
dnf install policycoreutils-python-utils-2.9-3.el8_1.1.noarch

Now we can check what state we are in via the built-in occ PHP tool:

cd /var/www/html/nextcloud/
sudo -u apache php occ -h
sudo -u apache php occ -V
sudo -u apache php occ status

And finally, we can access our Nextcloud and set up the administrator's password via the web: http://your-ip/

If you see the default httpd welcome page, comment out all lines in /etc/httpd/conf.d/welcome.conf.
Now complete the installation via the web interface: set the administrator's password and point Nextcloud at MariaDB with the credentials used earlier:

Database user: nextclouduser
Database password: YOURPASSWORD
Database name: nextcloud
host: localhost

In the Nextcloud settings, go to Administration > Overview. You may see some problems; if so, try to fix them. I had three. First, no APCu memory cache was configured, so add this to Nextcloud's config.php:

'memcache.local' => '\OC\Memcache\APCu',

Second, I had to adjust some PHP variables to configure the OPcache properly. Edit and adjust:

vim /etc/php.d/10-opcache.ini
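
For reference, a sketch of the OPcache values that Nextcloud's setup check typically recommends for releases of this era (verify against the admin manual for your exact version):

opcache.enable=1
opcache.interned_strings_buffer=8
opcache.max_accelerated_files=10000
opcache.memory_consumption=128
opcache.save_comments=1
opcache.revalidate_freq=1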

Third, I had to edit the httpd settings, because .htaccess was not working. So change the Apache config:

vim /etc/httpd/conf/httpd.conf

In the <Directory "/var/www/html"> section, change:
AllowOverride None
to:
AllowOverride All

And gracefully restart Apache:

apachectl graceful

Next, I found that my Nextcloud instance could not connect to the internet to check for updates. I suspected SELinux (enforcing mode), so I ran a check to find out what was happening:

sealert -a /var/log/audit/audit.log

And the result:

SELinux is preventing /usr/sbin/php-fpm from name_connect access on the tcp_socket port 80
Additional Information:
Source Context                system_u:system_r:httpd_t:s0
Source Path                   /usr/sbin/php-fpm
Port                          80
Selinux Enabled               True
Policy Type                   targeted
Enforcing Mode                Enforcing
---------
If you believe that php-fpm should be allowed name_connect access on the port 80 tcp_socket by default.
If you want to allow httpd to can network connect
Then you must tell SELinux about this by enabling the 'httpd_can_network_connect' boolean.

So I allow httpd to make network connections:

setsebool -P httpd_can_network_connect 1

And that is complete. If you want to secure HTTP with HTTPS, look for another post on this site.

Have fun


Encrypted LVM partition on software raid-1 with mdadm

In another post, https://www.gonscak.sk/?p=201, I described how to create a RAID-1 software raid with mdadm in Linux. Now I will add an encrypted filesystem on top of it.

First, check that we have a working software raid:

sudo mdadm --misc --detail /dev/md0

/dev/md0:
           Version : 1.2
     Creation Time : Wed Aug 22 09:34:23 2018
        Raid Level : raid1
        Array Size : 1953381440 (1862.89 GiB 2000.26 GB)
     Used Dev Size : 1953381440 (1862.89 GiB 2000.26 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent
     Intent Bitmap : Internal
       Update Time : Thu Aug 23 14:18:50 2018
             State : active 
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0
Consistency Policy : bitmap
              Name : gw36:0  (local to host gw36)
              UUID : ded4f30e:1cfb20cb:c10b843e:df19a8ff
            Events : 3481
    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1

Now the drives are synced and clean; it is time to encrypt. If the module for encryption is not loaded, load it:

modprobe dm-crypt

Now create the volume with passphrase:

sudo cryptsetup --cipher=aes-xts-plain --verify-passphrase --key-size=512 luksFormat /dev/md0

And we can open it:

sudo cryptsetup  luksOpen /dev/md0 cryptdisk

Now we can create a physical volume, a volume group and a logical volume as usual:

sudo pvcreate /dev/mapper/cryptdisk
sudo vgcreate raid1 /dev/mapper/cryptdisk
sudo lvcreate --size 500G --name lv-home raid1

sudo pvs
  PV                     VG        Fmt  Attr PSize    PFree
  /dev/mapper/cryptdisk  raid1     lvm2 a--    <1,82t 1,33t
sudo vgs
  VG        #PV #LV #SN Attr   VSize    VFree
  raid1       1   1   0 wz--n-   <1,82t 1,33t
sudo lvs
  LV      VG        Attr       LSize
  lv-home raid1     -wi-ao---- 500,00g            

Next, we create a filesystem on this logical volume:

sudo mkfs.ext4 /dev/mapper/raid1-lv--home

And we can mount it:

sudo mount /dev/mapper/raid1-lv--home crypt-home/

Now we have an encrypted partition (disk) for our home directory.
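
If you want this volume to be opened and mounted automatically at boot, here is a minimal sketch (you will be prompted for the passphrase at boot; the names follow the ones used above, the mount point is just an illustration):

/etc/crypttab:
cryptdisk  /dev/md0  none  luks

/etc/fstab:
/dev/mapper/raid1-lv--home  /home  ext4  defaults  0  2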


How to resize Physical volume and shrink disk partition

I installed a Proxmox environment on an Intel 240 GB SSD. The installation takes the whole disk for LVM, so I need to reduce the used space and create a new partition for DRBD.
This is my disk. You can see that the whole disk is allocated to LVM, with 171 G free inside the volume group.

root@pve1:/# gdisk -l /dev/sda
Disk /dev/sda: 468862128 sectors, 223.6 GiB
Number Start (sector) End (sector) Size         Code Name
 1      34             2047         1007.0 KiB   EF02
 2      2048           262143       127.0 MiB    EF00
 3      262144         468862094    223.4 GiB    8E00 Linux LVM
root@pve1:/# pvs
 PV VG Fmt Attr PSize PFree
 /dev/sda3 pve lvm2 a-- 223.44g 171.44g
root@pve1:/# vgs
 VG #PV #LV #SN Attr VSize VFree
 pve 1 3 0 wz--n- 223.44g 171.44g
root@pve1:/# lvs
 LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
 data pve -wi-ao--- 40.00g
 root pve -wi-ao--- 10.00g
 swap pve -wi-ao--- 2.00g

So, we list our logical volumes with segments on physical volume /dev/sda3:

root@pve1:/# pvs -v --segments /dev/sda3
 Using physical volume(s) on command line
 PV VG Fmt Attr PSize PFree Start SSize LV Start Type PE Ranges
 /dev/sda3 pve lvm2 a-- 223.44g 171.44g 0 512 swap 0 linear /dev/sda3:0-511
 /dev/sda3 pve lvm2 a-- 223.44g 171.44g 512 2560 root 0 linear /dev/sda3:512-3071
 /dev/sda3 pve lvm2 a-- 223.44g 171.44g 3072 10240 data 0 linear /dev/sda3:3072-13311
 /dev/sda3 pve lvm2 a-- 223.44g 171.44g 13312 43889 0 free

We can see the size of the PV is 223.44 G and 171.44 G of it is free, so we must shrink this physical volume by about 171.44 G. Compute the target size: 223.44 - 171.44 = 52 G, so our PV must keep at least 52 G (with the default 4 MiB extents, the 13312 allocated extents correspond to exactly 52 GiB). Next, we resize the PV:

root@pve1:/# pvresize --setphysicalvolumesize 52G /dev/sda3
 /dev/sda3: cannot resize to 13311 extents as 13312 are allocated.
 0 physical volume(s) resized / 1 physical volume(s) not resized
root@pve1:/# pvresize --setphysicalvolumesize 52.1G /dev/sda3
 Physical volume "/dev/sda3" changed
 1 physical volume(s) resized / 0 physical volume(s) not resized

As we can see, we cannot shrink to exactly that size: 52 G leaves only 13311 usable extents, because the PV metadata at the start of the partition also takes space, while 13312 extents are allocated. So we add roughly 100 M and use 52.1 G. Now we can see:

root@pve1:/# pvs -v --segments /dev/sda3
 Using physical volume(s) on command line
 PV VG Fmt Attr PSize PFree Start SSize LV Start Type PE Ranges
 /dev/sda3 pve lvm2 a-- 52.10g 100.00m 0 512 swap 0 linear /dev/sda3:0-511
 /dev/sda3 pve lvm2 a-- 52.10g 100.00m 512 2560 root 0 linear /dev/sda3:512-3071
 /dev/sda3 pve lvm2 a-- 52.10g 100.00m 3072 10240 data 0 linear /dev/sda3:3072-13311
 /dev/sda3 pve lvm2 a-- 52.10g 100.00m 13312 25 0 free

At this point we must work on the lowest layer of the disk, so we delete this partition and create a new one. The new partition must start on the same sector as the previous one, and its last sector must come after the last allocated segment of the physical volume. I use gdisk, because my disk has a GPT partition table:

root@pve1:/# gdisk /dev/sda
Command (? for help): p
Disk /dev/sda: 468862128 sectors, 223.6 GiB
Logical sector size: 512 bytes
First usable sector is 34, last usable sector is 468862094
Number Start (sector) End (sector) Size Code Name
 1 34 2047 1007.0 KiB EF02
 2 2048 262143 127.0 MiB EF00
 3 262144 468862094 223.4 GiB 8E00
Command (? for help): d
Partition number (1-3): 3
Command (? for help): n
Partition number (3-128, default 3):
First sector (262144-468862094, default = 262144) or {+-}size{KMGTP}:
Last sector (262144-468862094, default = 468862094) or {+-}size{KMGTP}: +53G
Current type is 'Linux filesystem'
Hex code or GUID (L to show codes, Enter = 8300): 8E00
Changed type of partition to 'Linux LVM'
Command (? for help): p
Disk /dev/sda: 468862128 sectors, 223.6 GiB
Total free space is 357450895 sectors (170.4 GiB)
Number Start (sector) End (sector) Size         Code Name
 1     34             2047         1007.0 KiB   EF02
 2     2048           262143       127.0 MiB    EF00
 3     262144         111411199    53.0 GiB     8E00 Linux LVM
Command (? for help): w
Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING
PARTITIONS!!
Do you want to proceed? (Y/N): y
OK; writing new GUID partition table (GPT) to /dev/sda.
Warning: The kernel is still using the old partition table.
The new table will be used at the next reboot.
The operation has completed successfully.

Now we must reboot the computer so the new partition table is used. After the reboot, resize the physical volume on partition /dev/sda3:

root@pve1:/# pvresize /dev/sda3
 Physical volume "/dev/sda3" changed
 1 physical volume(s) resized / 0 physical volume(s) not resized
root@pve1:/# pvs
 PV VG Fmt Attr PSize PFree
 /dev/sda3 pve lvm2 a-- 53.00g 1020.00m

Now, if we want to use all of the remaining free space for the logical volume "data", we can extend it over the whole free space like this:

root@pve1:/# lvresize /dev/pve/data -l +100%FREE
 Extending logical volume data to 41.00 GiB
 Logical volume data successfully resized
 root@pve1:/# lvs
 LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
 data pve -wi-ao--- 41.00g
 root pve -wi-ao--- 10.00g
 swap pve -wi-ao--- 2.00g
root@pve1:/# pvs
 PV VG Fmt Attr PSize PFree
 /dev/sda3 pve lvm2 a-- 53.00g 0

Now we can create a new partition at the end of the disk:

gdisk /dev/sda
Command (? for help): p
Disk /dev/sda: 468862128 sectors, 223.6 GiB
Logical sector size: 512 bytes
Number Start (sector) End (sector) Size Code Name
 1 34 2047 1007.0 KiB EF02
 2 2048 262143 127.0 MiB EF00
 3 262144 111411199 53.0 GiB 8E00 Linux LVM
Command (? for help): n
Partition number (4-128, default 4):
First sector (111411200-468862094, default = 111411200) or {+-}size{KMGTP}:
Last sector (111411200-468862094, default = 468862094) or {+-}size{KMGTP}:
Current type is 'Linux filesystem'
Hex code or GUID (L to show codes, Enter = 8300):
Changed type of partition to 'Linux filesystem'
Command (? for help): w
Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING
PARTITIONS!!
Do you want to proceed? (Y/N): y
OK; writing new GUID partition table (GPT) to /dev/sda.
Warning: The kernel is still using the old partition table.
The new table will be used at the next reboot.
The operation has completed successfully.
root@pve1:~# gdisk -l /dev/sda
Number Start (sector) End (sector) Size         Code Name
 1     34             2047         1007.0 KiB   EF02
 2     2048           262143       127.0 MiB    EF00
 3     262144         111411199    53.0 GiB     8E00 Linux LVM
 4     111411200      468862094    170.4 GiB    8300 Linux filesystem

And if we list the details of the physical volume, we can see that there is no free space left:

root@pve1:~# pvs -v --segments /dev/sda3
 Using physical volume(s) on command line
 PV VG Fmt Attr PSize PFree Start SSize LV Start Type PE Ranges
 /dev/sda3 pve lvm2 a-- 53.00g 0 0 512 swap 0 linear /dev/sda3:0-511
 /dev/sda3 pve lvm2 a-- 53.00g 0 512 2560 root 0 linear /dev/sda3:512-3071
 /dev/sda3 pve lvm2 a-- 53.00g 0 3072 10495 data 0 linear /dev/sda3:3072-13566

You can read about what DRBD is in another post on this page. Have fun.


How to set up drbd primary-primary mode on proxmox 4.x

Today I met an interesting problem: I tried to create a primary-primary (dual-primary) DRBD cluster on Proxmox.
First we must have a fully configured Proxmox two-node cluster, as described here:
https://pve.proxmox.com/wiki/Proxmox_VE_4.x_Cluster
We must also have a correct /etc/hosts configuration so that names resolve to IPs:

root@cl3-amd-node1:/etc/drbd.d# cat /etc/hosts
127.0.0.1 localhost.localdomain localhost
192.168.1.104 cl3-amd-node1 pvelocalhost
192.168.1.108 cl3-amd-node2
root@cl3-amd-node2:/etc/drbd.d# cat /etc/hosts
127.0.0.1 localhost.localdomain localhost
192.168.1.104 cl3-amd-node1
192.168.1.108 cl3-amd-node2 pvelocalhost

One server was built on a hardware RAID PCI-E LSI 9240-4i controller (/dev/sdb) and the second on software RAID via mdadm (/dev/md1), on Debian Jessie with the Proxmox packages installed. So the backend for the DRBD device is hardware RAID on one side and software RAID on the other. We must create two partitions with the same size (in sectors):

root@cl3-amd-node1:
fdisk -l /dev/sdb
Disk /dev/sdb: 1.8 TiB, 1998998994944 bytes, 3904294912 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
Device     Boot Start        End    Sectors   Size Id Type
/dev/sdb1        2048 1953260927 1953258880 931.4G 83 Linux
root@cl3-amd-node2:
fdisk -l /dev/md1
Disk /dev/md1: 931.4 GiB, 1000069595136 bytes, 1953260928 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
Device     Boot Start        End    Sectors   Size Id Type
/dev/md1p1       2048 1953260927 1953258880 931.4G 83 Linux

Now we need a direct network link between the servers for the DRBD traffic, which will be very high. I use a bond of two gigabit network cards:

#cl3-amd-node1:
cat /etc/network/interfaces
auto bond0
iface bond0 inet static
        address  192.168.5.104
        netmask  255.255.255.0
        slaves eth2 eth1
        bond_miimon 100
        bond_mode balance-rr
#cl3-amd-node2:
cat /etc/network/interfaces
auto bond0
iface bond0 inet static
        address  192.168.5.108
        netmask  255.255.255.0
        slaves eth1 eth2
        bond_miimon 100
        bond_mode balance-rr

And we can test the speed of this network with the iperf package:

apt-get install iperf

We start an iperf instance on one server by this command:

#cl3-amd-node2
iperf  -s -p 888

And from the other, we connect to this instance for 20 seconds:

#cl3-amd-node1
iperf -c 192.168.5.108 -p 888 -t 20
#and the conclusion
------------------------------------------------------------
Client connecting to 192.168.5.108, TCP port 888
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  3] local 192.168.5.104 port 49536 connected with 192.168.5.108 port 888
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-20.0 sec  4.39 GBytes  1.88 Gbits/sec

So we can see that I have a bonded network made of two network cards and the resulting speed is almost 2 Gbps.
Now we can continue with installing and setting up the DRBD resource.

apt-get install drbd-utils drbdmanage

All aspects of DRBD are controlled in its configuration file, /etc/drbd.conf. Normally this configuration file is just a skeleton with the following contents:
include "/etc/drbd.d/global_common.conf";
include "/etc/drbd.d/*.res";
The simplest global configuration is:

cat /etc/drbd.d/global_common.conf
global {
        usage-count yes;
}
common {
        net {
        protocol C;
        }
}

And the configuration of the resource itself; it must be identical on both nodes:

root@cl3-amd-node1:/etc/drbd.d# cat /etc/drbd.d/r0.res
resource r0 {
disk {
        c-plan-ahead 15;
        c-fill-target 24M;
        c-min-rate 90M;
        c-max-rate 150M;
}
net {
        protocol C;
        allow-two-primaries yes;
        data-integrity-alg md5;
        verify-alg md5;
}
on cl3-amd-node1 {
        device /dev/drbd0;
        disk /dev/sdb1;
        address 192.168.5.104:7789;
        meta-disk internal;
}
on cl3-amd-node2 {
        device /dev/drbd0;
        disk /dev/md1p1;
        address 192.168.5.108:7789;
        meta-disk internal;
}
}
root@cl3-amd-node2:/etc/drbd.d# cat /etc/drbd.d/r0.res
resource r0 {
disk {
        c-plan-ahead 15;
        c-fill-target 24M;
        c-min-rate 90M;
        c-max-rate 150M;
}
net {
        protocol C;
        allow-two-primaries yes;
        data-integrity-alg md5;
        verify-alg md5;
}
on cl3-amd-node1 {
        device /dev/drbd0;
        disk /dev/sdb1;
        address 192.168.5.104:7789;
        meta-disk internal;
}
on cl3-amd-node2 {
        device /dev/drbd0;
        disk /dev/md1p1;
        address 192.168.5.108:7789;
        meta-disk internal;
}
}

Now we must create and initialize the DRBD metadata on the backing devices, on both nodes:

drbdadm create-md r0
#answer yes to destroy possible data on devices

Now, we can start the drbd service, on both nodes:

root@cl3-amd-node2:/etc/drbd.d# /etc/init.d/drbd start
[ ok ] Starting drbd (via systemctl): drbd.service.
root@cl3-amd-node1:/etc/drbd.d# /etc/init.d/drbd start
[ ok ] Starting drbd (via systemctl): drbd.service.

Or we can bring the resource up on both nodes directly:

drbdadm up r0

And we can see that the resource is Inconsistent and both nodes are Secondary:

root@cl3-amd-node1:~# drbdadm status
r0 role:Secondary
  disk:Inconsistent
  cl3-amd-node2 role:Secondary
    peer-disk:Inconsistent

Start the initial full synchronization. This step must be performed on only one  node, only on initial resource configuration, and only on the node you selected as the synchronization source. To perform this step, issue this command:

root@cl3-amd-node1:# drbdadm primary --force r0

And we can see the status of our drbd storage:

root@cl3-amd-node2:~# drbdadm status
r0 role:Secondary
  disk:Inconsistent
  cl3-amd-node1 role:Primary
    replication:SyncTarget peer-disk:UpToDate done:3.10

After the synchronization finishes successfully, we promote the second server to primary as well:

root@cl3-amd-node2:~# drbdadm status
r0 role:Secondary
  disk:UpToDate
  cl3-amd-node1 role:Primary
    peer-disk:UpToDate
root@cl3-amd-node2:~# drbdadm primary r0

And we can see the status of this dual-primary (primary-primary) DRBD storage resource:

root@cl3-amd-node2:~# drbdadm status
r0 role:Primary
  disk:UpToDate
  cl3-amd-node1 role:Primary
    peer-disk:UpToDate

Now we have a new block device on both servers:

root@cl3-amd-node2:~# fdisk -l /dev/drbd0
Disk /dev/drbd0: 931.4 GiB, 1000037986304 bytes, 1953199192 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

We can configure this DRBD block device as a physical volume for LVM; the LVM sits on top of DRBD, so we can continue as if it were a physical disk. Do this on one server only; the change is reflected on the second server thanks to the dual-primary DRBD disk:

pvcreate /dev/drbd0
  Physical volume "/dev/drbd0" successfully created

As we can see, we must adapt /etc/lvm/lvm.conf to our needs, because LVM scans all block devices and finds duplicate entries:

root@cl3-amd-node2:~# pvs
  Found duplicate PV WXwDGteoexfmLxN6GQvt6Nd3jJxgvT2z: using /dev/drbd0 not /dev/md1p1
  Found duplicate PV WXwDGteoexfmLxN6GQvt6Nd3jJxgvT2z: using /dev/md1p1 not /dev/drbd0
  Found duplicate PV WXwDGteoexfmLxN6GQvt6Nd3jJxgvT2z: using /dev/drbd0 not /dev/md1p1
  PV         VG   Fmt  Attr PSize   PFree
  /dev/drbd0      lvm2 ---  931.36g 931.36g
  /dev/md0   pve  lvm2 a--  931.38g      0

So we must edit the filter option in this configuration. Look at our resource configuration r0.res: we must exclude the backend devices (/dev/sdb1 on one server and /dev/md1p1 on the other), or we can reject all devices and allow only specific ones. I prefer to reject everything and allow only what we want, so edit the filter variable:

root@cl3-amd-node1:~# cat /etc/lvm/lvm.conf | grep drbd
     filter =[ "a|/dev/drbd0|", "a|/dev/sda3|", "r|.*|" ]
root@cl3-amd-node2:~# cat /etc/lvm/lvm.conf | grep drbd
    filter =[ "a|/dev/drbd0|", "a|/dev/md0|", "r|.*|" ]

Now we no longer see duplicates and we can create a volume group, on one server only:

root@cl3-amd-node2:~# vgcreate drbd0-vg /dev/drbd0
  Volume group "drbd0-vg" successfully created
...
root@cl3-amd-node2:~# pvs
  PV         VG       Fmt  Attr PSize   PFree
  /dev/drbd0 drbd0-vg lvm2 a--  931.36g 931.36g
  /dev/md0   pve      lvm2 a--  931.38g      0

And finally we add the LVM volume group to Proxmox. This can be done via the web interface: go to Datacenter, click on Storage and add an LVM storage.
Then choose an ID (this is the name of your storage and cannot be changed later, for example drbd0-vg), select the previously created volume group drbd0-vg and enable sharing by ticking the 'shared' box.
Now we can create a virtual machine on this LVM and, thanks to DRBD, migrate it from one server to the other without downtime, because the storage is shared. When the migration starts, the machine is started on the other server, the contents of its RAM are migrated through an SSH tunnel, and after a few seconds it is running there.
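
As an illustration, the same live migration can also be started from the command line (hypothetical VM ID 100; qm migrate with the --online flag is the CLI counterpart of the GUI action):

qm migrate 100 cl3-amd-node2 --online
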
Sometimes, after the network disconnects and reconnects under certain circumstances, a split-brain is detected. If this happens, don't panic. Both servers are marked as "StandAlone" and the DRBD storage starts to diverge: from that moment, different writes land on each side. We must mark one of the servers as the victim, because one server has the "right" data and the other has the "wrong" data. The only way out is to back up the running virtual machines on the victim, then discard its data on the DRBD storage and synchronize it again from the other server, which has the right data. When this happens, you will see this in the logs:

root@cl3-amd-node1:~# dmesg | grep -i brain
[499210.096185] drbd r0/0 drbd0 cl3-amd-node1: helper command: /sbin/drbdadm initial-split-brain
[499210.097306] drbd r0/0 drbd0 cl3-amd-node1: helper command: /sbin/drbdadm initial-split-brain exit code 0 (0x0)
[499210.097313] drbd r0/0 drbd0: Split-Brain detected but unresolved, dropping connection!

We must solve this problem manually. I choose cl3-amd-node1 as the victim and set this node to secondary:

drbdadm secondary r0

And now we must disconnect it and reconnect it, marking its data to be discarded:

root@cl3-amd-node1:~# drbdadm connect --discard-my-data r0

And after the synchronization, promote it back to primary:

root@cl3-amd-node1:~# drbdadm primary r0

And in log, we can see:

cl3-amd-node1 kernel: [246882.068518] drbd r0/0 drbd0: Split-Brain detected, manually solved. Sync from peer node

Have fun.
 
