Install WordPress on CentOS 8 Stream with Apache (httpd)

I started with a clean CentOS 8 server created from the netinstall CD. It is a minimal installation. So, let's begin. Check the version to be installed:

dnf info httpd
Name         : httpd
 Version      : 2.4.37
 Release      : 11.module_el8.0.0+172+85fc1f40

So, let's install it, allow the http service in firewalld, and start the Apache server itself.

dnf install httpd
firewall-cmd --add-service=http --permanent
firewall-cmd --reload
systemctl start httpd.service
systemctl enable httpd.service

Now you can point your web browser at this server's IP address and you should see the welcome page of the Apache web server on CentOS.

Now create a directory where we will place our content, and a simple web page to test whether it is working.

mkdir -p /var/www/vhosts/com.example.www
vim /var/www/vhosts/com.example.www/index.html
<html>
  <head>
    <title>Welcome to www.example.com!</title>
  </head>
  <body>
    <h1>Success!  The www.example.com virtual host is working!</h1>
  </body>
</html>

And now, create a dedicated httpd configuration for this page:

vim /etc/httpd/conf.d/com.example.www.conf
<VirtualHost *:80>
    ServerAdmin admin@example.com
    DocumentRoot "/var/www/vhosts/com.example.www"
    ServerName www.example.com

    ErrorLog /var/log/httpd/com.example.www-error_log
    CustomLog /var/log/httpd/com.example.www-access_log common
</VirtualHost>

And now, gracefully restart your web server and point your browser to your domain: www.example.com (I edited my /etc/hosts to point this domain at my internal IP).

apachectl graceful

If your test page is working, let's continue with more things. We must install additional packages (software) for WordPress: a MySQL server and PHP. As the MySQL server, I use MariaDB. Then we create an initial configuration for MySQL and create a database for WordPress. I set no password for the MySQL root account.

dnf install mariadb-server mariadb
systemctl start mariadb
systemctl enable mariadb
mysql_secure_installation
   Set root password? [Y/n] n
   Remove anonymous users? [Y/n] y
   Disallow root login remotely? [Y/n] y
   Remove test database and access to it? [Y/n] y
   Reload privilege tables now? [Y/n] y

mysql -u root -p
   CREATE DATABASE wordpress;
   CREATE USER wordpressuser@localhost IDENTIFIED BY 'BESTpassword';
   GRANT ALL PRIVILEGES ON wordpress.* TO wordpressuser@localhost IDENTIFIED BY 'BESTpassword';
   FLUSH PRIVILEGES;
   exit;
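
To verify that the new account works, you can log in with it directly (a quick sanity check I added; the table list will be empty at this point):

mysql -u wordpressuser -p wordpress
   SHOW TABLES;
   exit;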

After checking which PHP version would be installed by default, I decided to use an additional package repository and install the newer PHP version 7.3:

dnf info php
 Available Packages
 Name         : php
 Version      : 7.2.11

dnf install http://rpms.remirepo.net/enterprise/remi-release-8.rpm
dnf update
dnf install php73
dnf install php73-php-fpm.x86_64 php73-php-mysqlnd.x86_64
systemctl start php73-php-fpm.service
systemctl enable php73-php-fpm.service
ln -s /usr/bin/php73 /usr/bin/php
php -v
   PHP 7.3.10 (cli) (built: Sep 24 2019 09:20:18) ( NTS )

Now, create a simple test PHP page to verify that PHP is working through Apache.

vim /var/www/vhosts/com.example.www/foo.php
<?php
  phpinfo();
?>

Restart the Apache web server and point your browser at the PHP page:

systemctl restart httpd.service
www.example.com/foo.php

And now you can see the PHP information page for the system.

Now we can download WordPress and unpack it.

cd ~ 
wget http://wordpress.org/latest.tar.gz
tar xzvf latest.tar.gz
rsync -avP wordpress/ /var/www/vhosts/com.example.www/
chown -R apache:apache /var/www/vhosts/

Now we edit the configuration and add a Directory block so that index.php is loaded by default. And we remove the test files foo.php and index.html.

rm /var/www/vhosts/com.example.www/foo.php
rm /var/www/vhosts/com.example.www/index.html
vim /etc/httpd/conf.d/com.example.www.conf
<Directory /var/www/vhosts/com.example.www>
DirectoryIndex index.php
</Directory>

And restart apache web server

systemctl restart httpd.service

Now we can continue with setting up WordPress via the web browser on our www.example.com page (click refresh in your browser). Follow the instructions and fill in your values (database name, user, password…).

Installation step 2 tells me that it cannot write config.php in our content directory. So I can either create config.php manually, or find out what is happening. Install the SELinux troubleshooting packages and run the sealert command, which tells us what happened.

dnf install setroubleshoot
sealert -a /var/log/audit/audit.log

I can see these messages:

SELinux is preventing /opt/remi/php73/root/usr/sbin/php-fpm from write access on the directory com.example.www.
If you want to allow php-fpm to have write access on the com.example.www directory
Then you need to change the label on 'com.example.www'
Do
# semanage fcontext -a -t httpd_sys_rw_content_t 'com.example.www'
# restorecon -v 'com.example.www'
Additional Information:
Source Context                system_u:system_r:httpd_t:s0
Target Context                unconfined_u:object_r:httpd_sys_content_t:s0
Target Objects                com.example.www [ dir ]

So I do what it wants: I adapt the labels so that Apache/PHP can write into this directory.

semanage fcontext -a -t httpd_sys_rw_content_t '/var/www/vhosts/com.example.www(/.*)?'
restorecon -Rv /var/www/vhosts/com.example.www/
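
To confirm the new label was applied, you can list the directory with its SELinux context (a quick check I added); the type should now be httpd_sys_rw_content_t:

ls -Zd /var/www/vhosts/com.example.www/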

Now I can continue with the installation, and everything works fine. Have a nice day.

Hardening iptables from “ACCEPT all” to “DROP all”

Now I will write some rules for hardening iptables: going from the default policy of ACCEPTing everything to DROPping everything except what I explicitly want to accept. This setup was made on Ubuntu Server 18.04.2 LTS.

This post is related to, and based on, these sites:

https://help.ubuntu.com/community/IptablesHowTo

https://www.digitalocean.com/community/tutorials/how-to-set-up-a-firewall-using-iptables-on-ubuntu-14-0

By default, we can see that everything is allowed:

iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination

So we start with allowing established sessions to receive traffic:

iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

-A INPUT: The -A flag appends a rule to the end of a chain. This is the portion of the command that tells iptables that we wish to add a new rule, that we want that rule added to the end of the chain, and that the chain we want to operate on is the INPUT chain.

And now we can allow the specific ports or services that we want:

iptables -A INPUT -p tcp --dport ssh -j ACCEPT
iptables -A INPUT -p tcp --dport http -j ACCEPT
iptables -A INPUT -p tcp --dport https -j ACCEPT

And now we block everything else coming to us:

iptables -A INPUT -j DROP

Now we can see our input chain in firewall:

iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
ACCEPT tcp -- anywhere anywhere tcp dpt:ssh
ACCEPT tcp -- anywhere anywhere tcp dpt:http
ACCEPT tcp -- anywhere anywhere tcp dpt:https
DROP all -- anywhere anywhere

Now we must add a rule for loopback, because we are blocking it now. If we added it with the command above, it would be appended at the end of the chain (after the drop-all rule), so all loopback traffic would still be blocked. We must insert it at the beginning of the chain:

iptables -I INPUT 1 -i lo -j ACCEPT

-I INPUT 1: The -I flag tells iptables to insert a rule. This is different than the -A flag which appends a rule to the end. The -I flag takes a chain and the rule position where you want to insert the new rule.

-i lo: This component of the rule matches if the interface that the packet is using is the “lo” interface. The “lo” interface is another name for the loopback device. This means that any packet using that interface to communicate (packets generated on our server, for our server) should be accepted.

And now we can see it:

iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
ACCEPT tcp -- anywhere anywhere tcp dpt:ssh
ACCEPT tcp -- anywhere anywhere tcp dpt:http
ACCEPT tcp -- anywhere anywhere tcp dpt:https
DROP all -- anywhere anywhere

The first and the last lines look very similar, so use the flag -v (verbose) or -S (list rules). See:

iptables -L -v
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
0 0 ACCEPT all -- lo any anywhere anywhere
287 46814 ACCEPT all -- any any anywhere anywhere ctstate RELATED,ESTABLISHED
0 0 ACCEPT tcp -- any any anywhere anywhere tcp dpt:ssh
0 0 ACCEPT tcp -- any any anywhere anywhere tcp dpt:http
0 0 ACCEPT tcp -- any any anywhere anywhere tcp dpt:https
211 45230 DROP all -- any any anywhere anywhere
iptables -S
-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 443 -j ACCEPT
-A INPUT -j DROP

Now we have five rules to ACCEPT the packets we want. Then we have the sixth rule to DROP all other packets.

The DROP-everything policy can be achieved in two ways. The first way is shown above: the default chain policy is ACCEPT, our five rules catch certain packets, and at the end the sixth rule DROPs all remaining packets. If the firewall breaks, or we accidentally flush our rules, we can still connect to our server (thanks to the default ACCEPT chain policy).

The second way is to set the default chain policy to DROP and put our five rules in place first. If a packet is caught by one of these rules, it is ACCEPTed; otherwise it is DROPped by the default policy. There is a risk that if we flush our firewall rules, we will never reach our server over the network again, because the default chain policy is DROP. So first we need the rules mentioned above, except the DROP rule, and then, at the end, we change the default chain policy with this command:

iptables -P INPUT DROP

And now look at this way of firewall:

iptables -S
-P INPUT DROP
-P FORWARD ACCEPT
-P OUTPUT ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 443 -j ACCEPT
-A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT

So we can see that we DROP the packets we want to drop and ACCEPT the packets we want to accept. It can be done either of these two ways, so pick the one you prefer. I prefer the second way, because I have another way to access the server (a console keyboard connected directly to it). So if something goes wrong, I am still able to reach it.

If you choose the first way, you must add any further rules before the DROP rule, because otherwise they would never be matched. Like the loopback rule, you must insert them somewhere before the DROP rule. See the line numbers:

iptables -L --line-numbers
Chain INPUT (policy ACCEPT)
num target prot opt source destination
1 ACCEPT all -- anywhere anywhere
2 ACCEPT tcp -- anywhere anywhere tcp dpt:ssh
3 ACCEPT tcp -- anywhere anywhere tcp dpt:http
4 ACCEPT tcp -- anywhere anywhere tcp dpt:https
5 ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
6 DROP all -- anywhere anywhere

And now we can add another rule somewhere in the middle:

iptables -I INPUT 6 -p tcp --dport 5666 -j ACCEPT

And we see it:

iptables -L --line-numbers
Chain INPUT (policy ACCEPT)
num target prot opt source destination
1 ACCEPT all -- anywhere anywhere
2 ACCEPT tcp -- anywhere anywhere tcp dpt:ssh
3 ACCEPT tcp -- anywhere anywhere tcp dpt:http
4 ACCEPT tcp -- anywhere anywhere tcp dpt:https
5 ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
6 ACCEPT tcp -- anywhere anywhere tcp dpt:nrpe
7 DROP all -- anywhere anywhere

To save these rules and make them persistent across reboots, I use this package:

apt-get install iptables-persistent

During installation you will be asked some questions, such as whether to save the current rules for permanent use and load them at the next boot. If you haven't done that, never mind; you can do it later with this:

iptables-save -c > /etc/iptables/rules.v4
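
And to load the saved rules back by hand (for example after a flush while testing), you can use iptables-restore:

iptables-restore < /etc/iptables/rules.v4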

How to use Apache as a Reverse Proxy on CentOS 7 with SELinux

Introduction

In addition to being a "basic" web server that provides static and dynamic content to end-users, Apache httpd (like most other web servers) can also act as a reverse proxy server, also known as a "gateway" server.

In such scenarios, httpd itself does not generate or host the data; instead, the content is obtained from one or several backend servers, which normally have no direct connection to the external network. When httpd receives a request from a client, the request is proxied to one of these backend servers, which handles the request, generates the content, and sends it back to httpd, which then generates the actual HTTP response back to the client.

There are numerous reasons for such an implementation, but generally the typical rationales are due to security, high-availability, load-balancing and centralized authentication/authorization.

It is critical in these implementations that the layout, design and architecture of the backend infrastructure (those servers which actually handle the requests) are insulated and protected from the outside; as far as the client is concerned, the reverse proxy server is the sole source of all content.


(Figure in the original post: a typical reverse-proxy implementation.)

In this tutorial, we will set up Apache as a basic reverse proxy using the mod_proxy module to redirect incoming connections to one or several backend servers running on the same network. The Apache proxy server also creates and manages security (the SSL engine, HTTPS). The connection from the proxy server to the backend servers is not encrypted (plain HTTP). Later, we will use HTTPS (SSL certificates from Let's Encrypt) for connections from the outside world, but not to the backend.

Installation

For a minimal HTTP server installation, install Apache itself:

yum install httpd -y

Make sure that the /etc/hosts file contains references for the loopback address and the hostname:

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
192.168.3.3 edge-proxy-e edge-proxy-e.gonscak.sk

Turn on the HTTP server and make sure it starts automatically on reboot. Next, add the http service to firewalld.

systemctl start httpd.service
systemctl enable httpd.service 
firewall-cmd --add-service=http --permanent
firewall-cmd --reload

Now we can test the Apache test web page over http. This page is there for testing and informational purposes:

http://edge-proxy-e.gonscak.sk

If you see the test page above, then your server is now correctly installed.

Example – Reverse Proxying a Single Backend Server

Create a first configuration file for our test backend server (I assume that you already have one):

vim /etc/httpd/conf.d/test-vhost.conf

<VirtualHost *:80>
    ServerName edge-proxy-e.gonscak.sk
    ProxyPreserveHost On
    ProxyPass / http://media.gonscak.sk/
    ProxyPassReverse / http://media.gonscak.sk/
</VirtualHost>

There are three directives here:

  • ProxyPreserveHost makes Apache pass the original Host header to the backend server. This is useful, as it makes the backend server aware of the address used to access the application.
  • ProxyPass is the main proxy configuration directive. In this case, it specifies that everything under the root URL (/) should be mapped to the backend server at the given address. For example, if Apache gets a request for /example, it will connect to http://media.gonscak.sk/example and return the response to the original client.
  • ProxyPassReverse should have the same configuration as ProxyPass. It tells Apache to modify the response headers from backend server. This makes sure that if the backend server returns a location redirect header, the client’s browser will be redirected to the proxy address and not the backend server address, which would not work as intended.
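
As an illustration of ProxyPassReverse (my own example, not part of the original setup): if the backend redirects a request, the proxy rewrites the Location header so that the client keeps talking to the proxy. You can observe this with curl against a hypothetical path:

curl -I http://edge-proxy-e.gonscak.sk/some-page
# without ProxyPassReverse: Location: http://media.gonscak.sk/some-page/
# with ProxyPassReverse:    Location: http://edge-proxy-e.gonscak.sk/some-page/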

Now we can test our configuration with the first command below; it runs a configuration file syntax test and reports OK or an error. With the second command we gracefully restart the Apache httpd daemon. If the daemon is not running, it is not started. Currently open connections are not aborted:

apachectl configtest
apachectl graceful

And now, if everything is OK, we can open our web page (http://192.168.3.3). We no longer see the default Apache page, but the content of the backend server media.gonscak.sk. We are not connected directly to media.gonscak.sk, only to the "edge" server with Apache.

Enabling SSL support and setting up certificates from Let's Encrypt

First, we must install the mod_ssl package so that Apache supports SSL:

yum install mod_ssl.x86_64

Now we must open port 443 for Apache in the firewall:

firewall-cmd --add-service=https --permanent
firewall-cmd --reload

Now we create a text file, /etc/httpd/conf.d/modern-ssl-template.txt, where we set up some SSL directives for a vhost. We can then simply adjust the SSL directives for all vhosts in Apache in one place. I use some Mozilla recommendations via https://mozilla.github.io/server-side-tls/ssl-config-generator:

    SSLEngine on
    SSLCertificateFile /etc/pki/tls/certs/newclient.crt
    SSLCertificateKeyFile /etc/pki/tls/private/newclient.key
    SSLCACertificateFile /etc/pki/tls/certs/ca.crt
    Header always set Strict-Transport-Security "max-age=15768000"

SSLProtocol             all -SSLv3 -TLSv1 -TLSv1.1
SSLCipherSuite          ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256

SSLHonorCipherOrder     on
SSLCompression          off

Next, I create an empty directory for the DocumentRoot. There will be no content in it:

mkdir -p /var/www/vhosts/sk.gonscak.media

I edit the config file /etc/httpd/conf.d/test-vhost.conf, add a virtual host for SSL, and add paths for the log files.

<VirtualHost *:80>
    ServerAdmin webmaster@gonscak.sk
    ServerName edge-proxy-e.gonscak.sk
    AddDefaultCharset UTF-8
    RedirectPermanent / https://edge-proxy-e.gonscak.sk/
</VirtualHost>

<VirtualHost *:443>
    ServerAdmin webmaster@gonscak.sk
    DocumentRoot "/var/www/vhosts/sk.gonscak.media"
    AddDefaultCharset UTF-8
    ServerName edge-proxy-e.gonscak.sk

    ErrorLog /var/log/httpd/sk.gonscak.media-error_log
    CustomLog /var/log/httpd/sk.gonscak.media-access_log common
    Include	/etc/httpd/conf.d/modern-ssl-template.txt

  <IfModule mod_proxy.c>
   ProxyRequests Off
   ProxyPass /.well-known/ !
   ProxyPass / http://media.gonscak.sk/
   ProxyPassReverse / http://media.gonscak.sk/
   SSLProxyEngine Off
   ProxyPreserveHost Off
  </IfModule>
</VirtualHost>

Now I hide some information that the outside world can otherwise get from our Apache server. Add these directives to the Apache configuration:

vim /etc/httpd/conf/httpd.conf
ServerSignature Off
ServerTokens Prod

A nice explanation of a proxy with WordPress behind it is here: https://community.pivotal.io/s/article/Purpose-of-the-X-Forwarded-Proto-HTTP-Header

 

Encrypted LVM partition on software RAID 1 with mdadm

In another post, https://www.gonscak.sk/?p=201, I showed how to create a RAID 1 software array with mdadm on Linux. Now I will try to add an encrypted filesystem on top of it.

First, check that we have a working software RAID:

sudo mdadm --misc --detail /dev/md0

/dev/md0:
           Version : 1.2
     Creation Time : Wed Aug 22 09:34:23 2018
        Raid Level : raid1
        Array Size : 1953381440 (1862.89 GiB 2000.26 GB)
     Used Dev Size : 1953381440 (1862.89 GiB 2000.26 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent
     Intent Bitmap : Internal
       Update Time : Thu Aug 23 14:18:50 2018
             State : active 
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0
Consistency Policy : bitmap
              Name : gw36:0  (local to host gw36)
              UUID : ded4f30e:1cfb20cb:c10b843e:df19a8ff
            Events : 3481
    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1

Now the drives are synced and clean. It is time to encrypt. If the kernel module for encryption is not loaded, load it:

modprobe dm-crypt

Now create the encrypted volume with a passphrase:

sudo cryptsetup --cipher=aes-xts-plain --verify-passphrase --key-size=512 luksFormat /dev/md0

And we can open it:

sudo cryptsetup  luksOpen /dev/md0 cryptdisk

Now we can create a physical volume, a volume group and a logical volume, as usual:

sudo pvcreate /dev/mapper/cryptdisk
sudo vgcreate raid1 /dev/mapper/cryptdisk
sudo lvcreate --size 500G --name lv-home raid1

sudo pvs
  PV                     VG        Fmt  Attr PSize    PFree
  /dev/mapper/cryptdisk  raid1     lvm2 a--    <1,82t 1,33t
sudo vgs
  VG        #PV #LV #SN Attr   VSize    VFree
  raid1       1   1   0 wz--n-   <1,82t 1,33t
sudo lvs
  LV      VG        Attr       LSize
  lv-home raid1     -wi-ao---- 500,00g            

Next, we create a filesystem on this logical volume:

sudo mkfs.ext4 /dev/mapper/raid1-lv--home

And we can mount it:

sudo mount /dev/mapper/raid1-lv--home crypt-home/

Now we have an encrypted partition (disk) for our home directory.
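
If you later want to deactivate the encrypted volume, a sketch of the reverse order (my addition) is to unmount, deactivate the volume group and close the LUKS device; opening it again goes the other way round:

sudo umount crypt-home/
sudo vgchange -an raid1
sudo cryptsetup luksClose cryptdisk
# and to bring it back:
sudo cryptsetup luksOpen /dev/md0 cryptdisk
sudo vgchange -ay raid1
sudo mount /dev/mapper/raid1-lv--home crypt-home/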

How to resize a physical volume and shrink a disk partition

I installed a Proxmox environment on an Intel 240GB SSD. The installation took the whole disk for LVM, so I need to reduce the used space and create a new partition for DRBD.
This is my disk. You can see that the whole disk is allocated to the PV, with 171G free inside LVM.

root@pve1:/# gdisk -l /dev/sda
Disk /dev/sda: 468862128 sectors, 223.6 GiB
Number Start (sector) End (sector) Size         Code Name
 1     34             2047         1007.0 KiB   EF02
 2     2048           262143       127.0 MiB    EF00
 3     262144         468862094    223.4 GiB    8E00 Linux LVM
root@pve1:/# pvs
 PV VG Fmt Attr PSize PFree
 /dev/sda3 pve lvm2 a-- 223.44g 171.44g
root@pve1:/# vgs
 VG #PV #LV #SN Attr VSize VFree
 pve 1 3 0 wz--n- 223.44g 171.44g
root@pve1:/# lvs
 LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
 data pve -wi-ao--- 40.00g
 root pve -wi-ao--- 10.00g
 swap pve -wi-ao--- 2.00g

So we list our logical volumes and their segments on the physical volume /dev/sda3:

root@pve1:/# pvs -v --segments /dev/sda3
 Using physical volume(s) on command line
 PV VG Fmt Attr PSize PFree Start SSize LV Start Type PE Ranges
 /dev/sda3 pve lvm2 a-- 223.44g 171.44g 0 512 swap 0 linear /dev/sda3:0-511
 /dev/sda3 pve lvm2 a-- 223.44g 171.44g 512 2560 root 0 linear /dev/sda3:512-3071
 /dev/sda3 pve lvm2 a-- 223.44g 171.44g 3072 10240 data 0 linear /dev/sda3:3072-13311
 /dev/sda3 pve lvm2 a-- 223.44g 171.44g 13312 43889 0 free

We can see that the size of the PV is 223.44G and 171.44G of it is free. So we must shrink this physical volume by 171.44G. Compute the new size of the physical volume: 223.44 − 171.44 = 52G. So our PV must keep at least 52G. Next, we resize the PV:

root@pve1:/# pvresize --setphysicalvolumesize 52G /dev/sda3
 /dev/sda3: cannot resize to 13311 extents as 13312 are allocated.
 0 physical volume(s) resized / 1 physical volume(s) not resized
root@pve1:/# pvresize --setphysicalvolumesize 52.1G /dev/sda3
 Physical volume "/dev/sda3" changed
 1 physical volume(s) resized / 0 physical volume(s) not resized

As we can see, we cannot shrink it to exactly that size. So we add 100M and use a size of 52.1G. Now we can see:

root@pve1:/# pvs -v --segments /dev/sda3
 Using physical volume(s) on command line
 PV VG Fmt Attr PSize PFree Start SSize LV Start Type PE Ranges
 /dev/sda3 pve lvm2 a-- 52.10g 100.00m 0 512 swap 0 linear /dev/sda3:0-511
 /dev/sda3 pve lvm2 a-- 52.10g 100.00m 512 2560 root 0 linear /dev/sda3:512-3071
 /dev/sda3 pve lvm2 a-- 52.10g 100.00m 3072 10240 data 0 linear /dev/sda3:3072-13311
 /dev/sda3 pve lvm2 a-- 52.10g 100.00m 13312 25 0 free

At this point we must work on the lowest layer of the disk, so we must delete this partition and create a new one. The new partition must start on the same sector as the previous one, and its last sector must lie after the last segment of the physical volume. I use gdisk, because my disk has a GPT partition table:

root@pve1:/# gdisk /dev/sda
Command (? for help): p
Disk /dev/sda: 468862128 sectors, 223.6 GiB
Logical sector size: 512 bytes
First usable sector is 34, last usable sector is 468862094
Number Start (sector) End (sector) Size         Code Name
 1     34             2047         1007.0 KiB   EF02
 2     2048           262143       127.0 MiB    EF00
 3     262144         468862094    223.4 GiB    8E00
Command (? for help): d
Partition number (1-3): 3
Command (? for help): n
Partition number (3-128, default 3):
First sector (262144-468862094, default = 262144) or {+-}size{KMGTP}:
Last sector (262144-468862094, default = 468862094) or {+-}size{KMGTP}: +53G
Current type is 'Linux filesystem'
Hex code or GUID (L to show codes, Enter = 8300): 8E00
Changed type of partition to 'Linux LVM'
Command (? for help): p
Disk /dev/sda: 468862128 sectors, 223.6 GiB
Total free space is 357450895 sectors (170.4 GiB)
Number Start (sector) End (sector) Size         Code Name
 1     34             2047         1007.0 KiB   EF02
 2     2048           262143       127.0 MiB    EF00
 3     262144         111411199    53.0 GiB     8E00 Linux LVM
Command (? for help): w
Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING
PARTITIONS!!
Do you want to proceed? (Y/N): y
OK; writing new GUID partition table (GPT) to /dev/sda.
Warning: The kernel is still using the old partition table.
The new table will be used at the next reboot.
The operation has completed successfully.

Now we must reboot the computer to use the new partition table. After the reboot, use this command to resize the physical volume on the partition /dev/sda3:

root@pve1:/# pvresize /dev/sda3
 Physical volume "/dev/sda3" changed
 1 physical volume(s) resized / 0 physical volume(s) not resized
root@pve1:/# pvs
 PV VG Fmt Attr PSize PFree
 /dev/sda3 pve lvm2 a-- 53.00g 1020.00m

Now, if we want to use all the free space for the logical volume "data", we can resize it to take the whole free space, like this:

root@pve1:/# lvresize /dev/pve/data -l +100%FREE
 Extending logical volume data to 41.00 GiB
 Logical volume data successfully resized
 root@pve1:/# lvs
 LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
 data pve -wi-ao--- 41.00g
 root pve -wi-ao--- 10.00g
 swap pve -wi-ao--- 2.00g
root@pve1:/# pvs
 PV VG Fmt Attr PSize PFree
 /dev/sda3 pve lvm2 a-- 53.00g 0

Now, we can create a new partition at the end of disk:

gdisk /dev/sda
Command (? for help): p
Disk /dev/sda: 468862128 sectors, 223.6 GiB
Logical sector size: 512 bytes
Number Start (sector) End (sector) Size         Code Name
 1     34             2047         1007.0 KiB   EF02
 2     2048           262143       127.0 MiB    EF00
 3     262144         111411199    53.0 GiB     8E00 Linux LVM
Command (? for help): n
Partition number (4-128, default 4):
First sector (111411200-468862094, default = 111411200) or {+-}size{KMGTP}:
Last sector (111411200-468862094, default = 468862094) or {+-}size{KMGTP}:
Current type is 'Linux filesystem'
Hex code or GUID (L to show codes, Enter = 8300):
Changed type of partition to 'Linux filesystem'
Command (? for help): w
Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING
PARTITIONS!!
Do you want to proceed? (Y/N): y
OK; writing new GUID partition table (GPT) to /dev/sda.
Warning: The kernel is still using the old partition table.
The new table will be used at the next reboot.
The operation has completed successfully.
root@pve1:~# gdisk -l /dev/sda
Number Start (sector) End (sector) Size         Code Name
 1     34             2047         1007.0 KiB   EF02
 2     2048           262143       127.0 MiB    EF00
 3     262144         111411199    53.0 GiB     8E00 Linux LVM
 4     111411200      468862094    170.4 GiB    8300 Linux filesystem

And if we list the details of the physical volume, we can see that there is no free space left:

root@pve1:~# pvs -v --segments /dev/sda3
 Using physical volume(s) on command line
 PV VG Fmt Attr PSize PFree Start SSize LV Start Type PE Ranges
 /dev/sda3 pve lvm2 a-- 53.00g 0 0 512 swap 0 linear /dev/sda3:0-511
 /dev/sda3 pve lvm2 a-- 53.00g 0 512 2560 root 0 linear /dev/sda3:512-3071
 /dev/sda3 pve lvm2 a-- 53.00g 0 3072 10495 data 0 linear /dev/sda3:3072-13566

And what DRBD is, you can read in another post on this page. Have fun.

How to create a software RAID 10 with mdadm

RAID 10, also called RAID 1+0, is a stripe of mirrors. It requires at least four disks. It stripes data across mirrored pairs, so as long as one disk in each mirrored pair is functional, the data can be retrieved. If two disks in the same mirrored pair fail, all data will be lost, because there is no parity.

RAID 10 provides redundancy and performance at the cost of 50% of the raw disk capacity.
A note on why to use disks from different manufacturers: disks will fail; this is not a matter of "if" but "when". Disks of the same manufacturer and the same model have similar properties, and therefore a higher chance of failing together under the same conditions and length of use. The suggestion is to use disks from different manufacturers and different models and, in particular, disks that do not belong to the same batch (consider buying from different stores if you are buying disks of the same manufacturer and model). It is not uncommon for a second disk to fail during a rebuild after a disk replacement when disks from the same batch are used. You certainly don't want this to happen to you.
So we have four disks for this: /dev/sdc, /dev/sdd, /dev/sde, /dev/sdf. First, we check whether there is any previous md superblock, so we examine the disks:

 mdadm -E /dev/sd[c-f]
/dev/sdc:
 MBR Magic : aa55
/dev/sdd:
 MBR Magic : aa55
/dev/sde:
 MBR Magic : aa55
/dev/sdf:
 MBR Magic : aa55

Now we must clear this MBR (the first 512 bytes):

dd if=/dev/zero of=/dev/sdc bs=512 count=1
512 bytes copied, 0.000379187 s, 1.4 MB/s
dd if=/dev/zero of=/dev/sdd bs=512 count=1
512 bytes copied, 0.000251414 s, 2.0 MB/s
dd if=/dev/zero of=/dev/sde bs=512 count=1
512 bytes copied, 0.000487665 s, 1.0 MB/s
dd if=/dev/zero of=/dev/sdf bs=512 count=1
512 bytes copied, 0.000436107 s, 1.2 MB/s

And now we can see that there is no superblock:

mdadm -E /dev/sd[c-f]
mdadm: No md superblock detected on /dev/sdc.
mdadm: No md superblock detected on /dev/sdd.
mdadm: No md superblock detected on /dev/sde.
mdadm: No md superblock detected on /dev/sdf.

Now we must create partitions of the same size. Disks from different manufacturers (or even different models of the "same" capacity from the same manufacturer) don't necessarily have exactly the same size. And in the future, we may replace a failed disk with another (maybe bigger) disk, but we must be able to create a partition of the same size.
So, list the disk sizes:

fdisk -l /dev/sd[c-f]
Disk /dev/sdc: 465.8 GiB, 500107862016 bytes, 976773168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sdd: 465.8 GiB, 500107862016 bytes, 976773168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sde: 465.8 GiB, 500107862016 bytes, 976773168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/sdf: 465.8 GiB, 500107862016 bytes, 976773168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

We can create the partitions with the fdisk command. Create a new primary partition spanning the same sectors on each disk:

fdisk -l /dev/sd[c-f]
Disk /dev/sdc: 465.8 GiB, 500107862016 bytes, 976773168 sectors
/dev/sdc1 2048 976773167 976771120 465.8G 83 Linux
Disk /dev/sdd: 465.8 GiB, 500107862016 bytes, 976773168 sectors
/dev/sdd1 2048 976773167 976771120 465.8G 83 Linux
Disk /dev/sde: 465.8 GiB, 500107862016 bytes, 976773168 sectors
/dev/sde1 2048 976773167 976771120 465.8G 83 Linux
Disk /dev/sdf: 465.8 GiB, 500107862016 bytes, 976773168 sectors
/dev/sdf1 2048 976773167 976771120 465.8G 83 Linux
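
For reference, the same partitions can also be created non-interactively (a sketch using sfdisk; I created mine interactively with fdisk, and the start sector and size are taken from the listing above):

for d in /dev/sd[c-f]; do
  echo '2048,976771120,83' | sfdisk "$d"
done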

To be sure, check that no magic block is left in the partitions:

mdadm -E /dev/sd[c-f]1
mdadm: No md superblock detected on /dev/sdc1.
/dev/sdd1:
 MBR Magic : aa55
Partition[0] : 1836016416 sectors at 1936269394 (type 4f)
Partition[1] : 544437093 sectors at 1917848077 (type 73)
Partition[2] : 544175136 sectors at 1818575915 (type 2b)
Partition[3] : 54974 sectors at 2844524554 (type 61)
mdadm: No md superblock detected on /dev/sde1.
mdadm: No md superblock detected on /dev/sdf1.

So, clear this superblock:

dd if=/dev/zero of=/dev/sdd1 bs=512 count=1
512 bytes copied, 0.000261033 s, 2.0 MB/s

And check for the last time:

mdadm -E /dev/sd[c-f]1
mdadm: No md superblock detected on /dev/sdc1.
mdadm: No md superblock detected on /dev/sdd1.
mdadm: No md superblock detected on /dev/sde1.
mdadm: No md superblock detected on /dev/sdf1.

And finally, we create the RAID array:

mdadm --create /dev/md1 --level=10 --raid-devices=4 /dev/sd[c-f]1
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md1 started.

Check the status of initial synchronization:

cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md1 : active raid10 sdf1[3] sde1[2] sdd1[1] sdc1[0]
 976508928 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
 [>....................] resync = 0.2% (2810176/976508928) finish=138.5min speed=117090K/sec
 bitmap: 8/8 pages [32KB], 65536KB chunk
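
To make the array assemble automatically after a reboot, the usual extra step (not shown in my session above) is to record it in mdadm.conf and update the initramfs; on Debian/Ubuntu this looks like the following (the config path is /etc/mdadm.conf on RHEL-family systems, which also have no update-initramfs):

mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u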

 

Disk cloning with dd

How do you create a disk or USB image and compress it on the fly? And how do you restore it?
I have my own operating system on a USB key. To create a full backup, and possibly restore it to another device, I use the Linux command dd (dd – convert and copy a file).
First we must determine on which path we have the source disk. In my case, it is:

sudo fdisk -l /dev/sdb
Disk /dev/sdb: 29,5 GiB

First, I install additional software for monitoring progress and for better compression across multiple cores:

sudo apt-get install pigz pv

Then I create a full copy of the USB key. Without compression it takes 30GB; with compression it takes only 3GB. With the pv command we can watch the progress. pigz compresses the source image with multiple threads and cores. With the parameter -c it writes all processed output to stdout, so with the ">" operator we write the pigz output to a file:

sudo dd if=/dev/sdb | pv | pigz -c > /home/vasil/Documents/corsair-work.dd.gz

Then I remove the source USB key and insert a new one. It also gets the path /dev/sdb. Now I restore the image with this command:

pigz -cdk Documents/corsair-work.dd.gz |pv| sudo dd of=/dev/sdb bs=4M

The parameter -c again writes the output to stdout, and dd writes it to the disk. The parameter -k means that the original file is kept after decompression. And the parameter -d means decompress.
Now we can boot the system from the new USB key. The image is identical to the source (a way to verify this is sketched below).
I hope this helps someone. Have a nice day.
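
A way to verify the clone (my own sanity check, assuming the device and image paths from above): read the key and the decompressed image once more and compare the checksums; they should be identical:

sudo dd if=/dev/sdb bs=4M | md5sum
pigz -cdk Documents/corsair-work.dd.gz | md5sum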

Bareos on CentOS 7 – a powerful backup tool

Today I met a backup problem: I need to find and set up a solution for backing up, and possibly restoring, files on Windows or Linux. I had heard about Bacula, but after some searching and reading, I chose a newer fork of Bacula – Bareos.

Installing Bareos itself

So I installed it on a new, clean CentOS 7 VM. First, define a hostname:

hostnamectl set-hostname bareos-ba

Next, add the Bareos repository:

cd /etc/yum.repos.d/
wget http://download.bareos.org/bareos/release/latest/CentOS_7/bareos.repo
yum install bareos -y

Next, we use MariaDB server as the backend for Bareos:

yum install mariadb-server -y
systemctl start mariadb.service
systemctl enable mariadb.service

Now we create and mount a file storage where Bareos will save the data:

fdisk /dev/vda
...
mkfs.xfs /dev/vda1
mkdir /var/backups
mount /dev/vda1 /var/backups/
chown bareos:bareos -R /var/backups/
df -h
...
Filesystem                  Size  Used Avail Use% Mounted on
/dev/vda1                    32G   33M   32G   1% /var/backups

Edit /etc/fstab to make this mount permanent.
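
For example, the /etc/fstab line might look like this (a sketch; using the UUID reported by blkid instead of the device name is more robust):

/dev/vda1    /var/backups    xfs    defaults    0 0
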
Now, we can create a new bareos database with pre-defined scripts:

[root@bareos-ba]#/usr/lib/bareos/scripts/create_bareos_database
Creating mysql database
Creating of bareos database succeeded.
[root@bareos-ba]# /usr/lib/bareos/scripts/make_bareos_tables
Making mysql tables
Creation of Bareos MySQL tables succeeded.
[root@bareos-ba]# /usr/lib/bareos/scripts/grant_bareos_privileges
Granting mysql tables
Privileges for user bareos granted ON database bareos.

Now, we can check our default configuration with:

su bareos -s /bin/sh -c "/usr/sbin/bareos-dir -t"
su bareos -s /bin/sh -c "/usr/sbin/bareos-sd -t"
bareos-fd -t

If you are using a firewall, open these ports for the Bareos server:

firewall-cmd --zone=public --add-port=9101/tcp --permanent
firewall-cmd --zone=public --add-port=9102/tcp --permanent
firewall-cmd --zone=public --add-port=9103/tcp --permanent
#http is needed only if you want the web GUI for bareos
firewall-cmd --zone=public --add-service=http --permanent
firewall-cmd --reload
firewall-cmd --list-all
#public (active)
# - services: http ssh
# - ports: 5666/tcp 9103/tcp 9101/tcp 161/udp 9102/tcp

This step is only for the Bareos WebUI. If you don't need it, skip it.

yum install bareos-webui -y
setsebool -P httpd_can_network_connect on
systemctl start httpd.service
systemctl enable httpd.service

Edit the conf file and set the FQDN of this host:

vim /etc/bareos-webui/directors.ini
- diraddress = "bareos-ba.example.com"

Copy example admin console config:

cp /etc/bareos/bareos-dir.d/console/admin.conf.example /etc/bareos/bareos-dir.d/console/admin.conf
chown bareos:bareos /etc/bareos/bareos-dir.d/console/admin.conf

Setting up a storage for bareos director

First, we must add our previously created and mounted disk to the bareos-storage daemon, and then add it to the bareos-director daemon so that it can be used.

cp /etc/bareos/bareos-sd.d/device/FileStorage.conf /etc/bareos/bareos-sd.d/device/backups.conf
chown bareos:bareos /etc/bareos/bareos-sd.d/device/backups.conf
vim /etc/bareos/bareos-sd.d/device/backups.conf
 - change archive device and the name:
Archive Device = /var/backups
Name = Backups
cp /etc/bareos/bareos-dir.d/storage/File.conf /etc/bareos/bareos-dir.d/storage/backups.conf
chown bareos:bareos /etc/bareos/bareos-dir.d/storage/backups.conf
vim /etc/bareos/bareos-dir.d/storage/backups.conf
 - change Name and Device. Name must be the same as above:
Name = Backups
Device = Backups

Now we edit job definitions:

vim /etc/bareos/bareos-dir.d/jobdefs/DefaultJob.conf
 - change Storage variable to ours above mentioned:
Storage = Backups

Now check the Bareos config files for errors again:

su bareos -s /bin/sh -c "/usr/sbin/bareos-dir -t"
su bareos -s /bin/sh -c "/usr/sbin/bareos-sd -t"
bareos-fd -t

and restart (start) bareos:

service bareos-dir restart
service bareos-sd restart
service bareos-fd restart
systemctl enable bareos-dir.service
systemctl enable bareos-sd.service
systemctl enable bareos-fd.service

Using bconsole and the WebUI

Our WebUI is at the address below. The default login and password are admin/admin.

http://bareos-ba.globesy.sk/bareos-webui/

Our Bareos console is available via the bconsole command:

[root@bareos-ba ~]# bconsole
Connecting to Director localhost:9101
1000 OK: bareos-dir Version: 16.2.4 (01 July 2016)
Enter a period to cancel a command.
*

The bconsole prompt is marked with an asterisk (*) at the beginning of the line.
Some useful commands:

list storages
Automatically selected Catalog: MyCatalog
Using Catalog "MyCatalog"
+-----------+---------+-------------+
| StorageId | Name    | AutoChanger |
+-----------+---------+-------------+
|         1 | File    |           0 |
|         2 | Backups |           0 |
+-----------+---------+-------------+
list pools
show jobdefs
show filesets
status dir
status client=bareos-fd

Now we can start our first job – the SelfTest. So, run bconsole and continue:

bconsole
*run
- select job resource 3: backup-bareos-fs
- yes => Job queued. JobId=1
*wait jobid=1
*messages
quit

In the messages we can see that Bareos backed up almost 44MB of files. In the fileset of this SelfTest, we can see that Bareos backs up the folder /usr/sbin:

cat /etc/bareos/bareos-dir.d/fileset/SelfTest.conf

Now we can restore these files. The default restore job restores them to /tmp/bareos-restores:

 cat /etc/bareos/bareos-dir.d/job/RestoreFiles.conf

Run bconsole:

*restore all client=bareos-fd
- select 5 for most recent backup
- done
- yes
Job queued. JobId=2
*wait jobid=2
*messages
..

We can see our restored files in /tmp/bareos-restores/.
 

How to install Nextcloud on CentOS 7 minimal

First, please update your CentOS. Every command I use is run as root 😉

yum -y update

Installing database server MariaDB

Next, we install MariaDB and create an empty database for our Nextcloud. Then we start it and enable it to autostart after boot.
If you wish, you can skip the installation of MariaDB and use the built-in SQLite; in that case continue with installing the Apache web server.

yum -y install mariadb mariadb-server
...
systemctl start mariadb
systemctl enable mariadb

Now we run the post-installation script to finish setting up the MariaDB server:

mysql_secure_installation
...
Enter current password for root (enter for none): ENTER
Set root password? [Y/n] Y
Remove anonymous users? [Y/n] Y
Disallow root login remotely? [Y/n] Y
Remove test database and access to it? [Y/n] Y
Reload privilege tables now? [Y/n] Y

Now, we can create a database for nextcloud.

mysql -u root -p
...
CREATE DATABASE nextcloud;
GRANT ALL PRIVILEGES ON nextcloud.* TO 'nextclouduser'@'localhost' IDENTIFIED BY 'YOURPASSWORD';
FLUSH PRIVILEGES;
exit;

Installing the Apache web server with SSL (Let's Encrypt)

Now we install the Apache web server, start it, and enable it to autostart after boot:

yum install httpd -y
systemctl start httpd.service
systemctl enable httpd.service

Now we install SSL support for Apache and allow the https service in the firewall:

yum -y install epel-release
yum -y install httpd mod_ssl
...
firewall-cmd --zone=public --permanent --add-service=https
firewall-cmd --zone=public --permanent --add-service=http
firewall-cmd --reload
systemctl restart httpd.service
systemctl status httpd

Now we can access our server via https://our.server.sk.
If we want a signed certificate from Let's Encrypt, we can get one with the next commands. Certbot will ask some questions, so answer them.

yum -y install python-certbot-apache
certbot --apache -d example.com

If all went well, we will see:

IMPORTANT NOTES:
 - Congratulations! Your certificate and chain have been saved at
   /etc/letsencrypt/live/example.com/fullchain.pem.
...

And we can test our page with this:

https://www.ssllabs.com/ssltest/analyze.html?d=example.com&latest

Install PHP 7

The creators of Nextcloud recommend at least PHP 5.4; I use PHP 7.
PHP 5.4 has been end-of-life since September 2015 and is no longer supported by the PHP team. RHEL 7 still ships with PHP 5.4, and Red Hat supports it. Nextcloud also supports PHP 5.4, so upgrading is not required. However, it is highly recommended to upgrade to PHP 5.5+ for the best security and performance.
Now we must add some additional repositories:

rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
rpm -Uvh https://mirror.webtatic.com/yum/el7/webtatic-release.rpm

And we can install php 7.2:

yum install mod_php72w.x86_64 php72w-common.x86_64 php72w-gd.x86_64 php72w-intl.x86_64 php72w-mysql.x86_64 php72w-xml.x86_64 php72w-mbstring.x86_64 php72w-cli.x86_64 php72w-process.x86_64

Check it:

php --ini |grep Loaded
Loaded Configuration File:         /etc/php.ini
php -v
PHP 7.2.22 (cli) (built: Sep 11 2019 18:11:52) ( NTS )
Copyright (c) 1997-2018 The PHP Group
Zend Engine v3.2.0, Copyright (c) 1998-2018 Zend Technologies

In my case, I will use Nextcloud as my backup device, so I increase the default upload limit to 200MB (and the memory limit to 512MB):

sed -i "s/post_max_size = 8M/post_max_size = 200M/" /etc/php.ini
sed -i "s/upload_max_filesize = 2M/upload_max_filesize = 200M/" /etc/php.ini
sed -i "s/memory_limit = 128M/memory_limit = 512M/" /etc/php.ini

Restart web server:

systemctl restart httpd

Installing Nextcloud

First, I install the wget tool for downloading, and unzip:

 yum -y install wget unzip

Now we can download Nextcloud (at this time the latest version is 16.0.4), extract it from the archive to its final destination, and change the ownership of that directory:

wget https://download.nextcloud.com/server/releases/nextcloud-16.0.4.zip
...
unzip nextcloud-16.0.4.zip -d /var/www/html/
...
chown -R apache:apache /var/www/html/nextcloud/

Check whether SELinux is enabled with the sestatus command:

sestatus 

SELinux status:                 enabled
SELinuxfs mount:                /sys/fs/selinux
SELinux root directory:         /etc/selinux
Loaded policy name:             targeted
Current mode:                   enforcing
Mode from config file:          enforcing
Policy MLS status:              enabled
Policy deny_unknown status:     allowed
Max kernel policy version:      31

According to the Nextcloud admin manual, you can run into permission problems. Run these commands as root to adjust the file contexts:

semanage fcontext -a -t httpd_sys_rw_content_t '/var/www/html/nextcloud/data(/.*)?'
semanage fcontext -a -t httpd_sys_rw_content_t '/var/www/html/nextcloud/config(/.*)?'
semanage fcontext -a -t httpd_sys_rw_content_t '/var/www/html/nextcloud/apps(/.*)?'
semanage fcontext -a -t httpd_sys_rw_content_t '/var/www/html/nextcloud/.htaccess'
semanage fcontext -a -t httpd_sys_rw_content_t '/var/www/html/nextcloud/.user.ini'
restorecon -Rv '/var/www/html/nextcloud/'

If you see the error "-bash: semanage: command not found", install these packages:

yum provides /usr/sbin/semanage
yum install policycoreutils-python-2.5-33.el7.x86_64

And finally, we can access our Nextcloud and set up the administrator's password via the web: https://your-ip/nextcloud
Now you must complete the installation via the web interface. Set the administrator's password and point the installer at MariaDB with the credentials used above:

Database user: nextclouduser
Database password: YOURPASSWORD
Database name: nextcloud
host: localhost

In my case, I had to create a DATA folder under the Nextcloud root and set its permissions:

mkdir /var/www/html/nextcloud/data
chown apache:apache /var/www/html/nextcloud/data -R
semanage fcontext -a -t httpd_sys_rw_content_t '/var/www/html/nextcloud/data(/.*)?'
restorecon -Rv '/var/www/html/nextcloud/'

For easier access, I created a permanent redirect from my IP/domain to the Nextcloud root folder. This redirect allows you to open the page

https://your-ip

and redirect you to:

https://your-ip/nextcloud

You must edit the httpd.conf file and add this line:

vim /etc/httpd/conf/httpd.conf
...
RedirectMatch ^/$ https://your-ip/nextcloud
...
systemctl restart httpd.service

If we see an error like "Your data directory and files are probably accessible from the Internet. The .htaccess file is not working.", try editing and changing these settings:

vim /etc/httpd/conf/httpd.conf
....
<Directory "/var/www/html">
    AllowOverride All
    Require all granted
    Options Indexes FollowSymLinks
</Directory>

Enable updates via the web interface

To enable updates via the web interface, you may need this SELinux boolean to allow writing to the directories:

setsebool httpd_unified on

When the update is completed, disable write access again:

setsebool -P httpd_unified off

Disallow write access to the whole web directory

For security reasons it’s suggested to disable write access to all folders in /var/www/ (default):

setsebool -P  httpd_unified  off

A way to enable enhanced security is with your own configuration file:

vim  /etc/httpd/conf.d/owncloud.conf
...
Alias /nextcloud "/var/www/html/nextcloud/"
<Directory /var/www/html/nextcloud/>
  Options +FollowSymlinks
  AllowOverride All
 <IfModule mod_dav.c>
  Dav off
 </IfModule>
 SetEnv HOME /var/www/html/nextcloud
 SetEnv HTTP_HOME /var/www/html/nextcloud
</Directory>
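
After adding this configuration, test the syntax and restart Apache, as elsewhere in this guide:

apachectl configtest
systemctl restart httpd.service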

How to resize a VirtualBox fixed VDI storage into a dynamic or larger fixed file

This short post shows you how to resize a small VHD/VDI file into one bigger file. This bigger file can be dynamic or fixed in size on the hard drive. I am working on an SSD disk, so it is very fast 🙂 I use the command line in Windows (Start > Run > cmd) and enter the VirtualBox directory:

C:\Users\user>cd c:\
c:\>cd "Program Files\Oracle\VirtualBox"\

So, the input file is “e:\virtual_small.vhd” :

c:\Program Files\Oracle\VirtualBox>VBoxManage.exe showhdinfo e:\virtual_small.vhd
UUID:           617f112b-dac5-4e96-b435-437203992efa
Parent UUID:    base
State:          created
Type:           normal (base)
Location:       e:\virtual_small.vhd
Storage format: VHD
Format variant: fixed default
Capacity:       15360 MBytes
Size on disk:   15360 MBytes
Encryption:     disabled

So, the input file is small and we want a larger one. We must clone it into a new, dynamically allocated file:

c:\Program Files\Oracle\VirtualBox>VBoxManage.exe clonehd e:\virtual_small.vhd e:\virtual_dyn.vhd
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Clone medium created in format 'VHD'. UUID: b48eebd1-daa5-4020-9774-d5ca4b985b45
c:\Program Files\Oracle\VirtualBox>VBoxManage.exe showhdinfo e:\virtual_dyn.vhd
UUID:           b48eebd1-daa5-4020-9774-d5ca4b985b45
Parent UUID:    base
State:          created
Type:           normal (base)
Location:       e:\virtual_dyn.vhd
Storage format: VHD
Format variant: dynamic default
Capacity:       15360 MBytes
Size on disk:   15245 MBytes
Encryption:     disabled

Now we can resize it to a new size, for example 25000MB:

c:\Program Files\Oracle\VirtualBox>VBoxManage.exe modifyhd e:\virtual_dyn.vhd --resize 25000
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
c:\Program Files\Oracle\VirtualBox>VBoxManage.exe showhdinfo e:\virtual_dyn.vhd
UUID:           fe1c2a26-39d4-4f31-b4da-bc688b4a3c22
Parent UUID:    base
State:          created
Type:           normal (base)
Location:       e:\virtual_dyn.vhd
Storage format: VHD
Format variant: dynamic default
Capacity:       25000 MBytes
Size on disk:   15247 MBytes
Encryption:     disabled

And now we can clone it into a fixed-size file. A fixed-size disk is better for performance on a classic (spinning) disk: with a dynamic file there is never-ending resizing, because VirtualBox must allocate new space whenever the virtual machine grows over its lifetime, whereas a fixed file allocates all of its space at the beginning. On SSD disks, dynamic is fine. Fixed is OK for me, because I don't care how much space the file occupies from the beginning.

c:\Program Files\Oracle\VirtualBox>VBoxManage.exe clonehd e:\virtual_dyn.vhd e:\virtual_static.vhd --variant Fixed
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Clone medium created in format 'VHD'. UUID: 3ddb4a53-a767-478f-8dc7-f670610320ca
c:\Program Files\Oracle\VirtualBox>VBoxManage.exe showhdinfo e:\virtual_static.vhd
UUID:           3ddb4a53-a767-478f-8dc7-f670610320ca
Parent UUID:    base
State:          created
Type:           normal (base)
Location:       e:\virtual_static.vhd
Storage format: VHD
Format variant: fixed default
Capacity:       25000 MBytes
Size on disk:   25000 MBytes
Encryption:     disabled

Have a nice day.

Rescue a disk with ddrescue from Ubuntu

I have a broken, partially working disk. This is part of dmesg after plugging in the removable 2.5″ USB disk, and the listing from fdisk:

[1448.206941] blk_update_request: I/O error, dev sdb, sector 6293504
fdisk -l /dev/sdb
Disk /dev/sdb: 931,5 GiB, 1000170586112 bytes, 1953458176 sectors
......
Device     Boot Start        End    Sectors   Size Id Type
/dev/sdb1        2048 1953458175 1953456128 931,5G  7 HPFS/NTFS/exFAT

So I will try to rescue some data from it. I will use the gddrescue program:

apt-get install gddrescue

And now I have mounted a big 3TB NFS storage, where I will save the image of this disk:

ddrescue -r1 -v -d /dev/sdb /mnt/nfs/sdb.img /mnt/nfs/sdb.log
  • -r1 means that ddrescue will try to read every block one time before giving up on it
  • -v means verbose mode
  • -d means that ddrescue uses direct disk access and ignores the kernel's cache
  • /dev/sdb is the failing drive
  • /mnt/nfs/sdb.img is the destination image, where we save any data
  • /mnt/nfs/sdb.log is the log file, where every bad block and the current position of ddrescue are recorded. We can break off this rescue at any time and continue later with the same command. When ddrescue finishes, we can repeat the pass on only the bad blocks with more retries, as sketched below.
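
For example, a second pass over only the previously failed blocks with more retries could look like this (a sketch; thanks to the log/map file, ddrescue resumes and retries only what is still missing):

ddrescue -r3 -v -d /dev/sdb /mnt/nfs/sdb.img /mnt/nfs/sdb.log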

 

  • 22.3.2017 – it was started. The post will continue after it finishes 😀 Maybe it will take 3 days to finish, maybe more 🙂 This operation takes a long time…

How to set up DRBD primary-primary mode on Proxmox 4.x

Today I met an interesting problem: I tried to create a primary-primary (dual-primary) DRBD cluster on Proxmox.
First, we must have a fully configured Proxmox two-node cluster, like this:
https://pve.proxmox.com/wiki/Proxmox_VE_4.x_Cluster
We must have a good /etc/hosts configuration to resolve names into IPs:

root@cl3-amd-node1:/etc/drbd.d# cat /etc/hosts
cat /etc/hosts
127.0.0.1 localhost.localdomain localhost
192.168.1.104 cl3-amd-node1 pvelocalhost
192.168.1.108 cl3-amd-node2
root@cl3-amd-node2:/etc/drbd.d# cat /etc/hosts
cat /etc/hosts
127.0.0.1 localhost.localdomain localhost
192.168.1.104 cl3-amd-node1
192.168.1.108 cl3-amd-node2 pvelocalhost

One server was built on a hardware RAID PCI-E LSI 9240-4i controller (/dev/sdb) and the second server was built on software RAID via mdadm (/dev/md1) on Debian Jessie with the Proxmox packages installed. So the backend for the DRBD devices was a hardware RAID on one side and a software RAID on the other. We must create two disks (partitions) with the same size in sectors:

root@cl3-amd-node1:
fdisk -l /dev/sdb
Disk /dev/sdb: 1.8 TiB, 1998998994944 bytes, 3904294912 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
Device     Boot Start        End    Sectors   Size Id Type
/dev/sdb1        2048 1953260927 1953258880 931.4G 83 Linux
root@cl3-amd-node2:
fdisk -l /dev/md1
Disk /dev/md1: 931.4 GiB, 1000069595136 bytes, 1953260928 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
Device     Boot Start        End    Sectors   Size Id Type
/dev/md1p1       2048 1953260927 1953258880 931.4G 83 Linux

Now we need a direct network between the servers for the DRBD traffic, which will be very high. I use a bond of two gigabit network cards:

#cl3-amd-node1:
cat /etc/network/interfaces
auto bond0
iface bond0 inet static
        address  192.168.5.104
        netmask  255.255.255.0
        slaves eth2 eth1
        bond_miimon 100
        bond_mode balance-rr
#cl3-amd-node2:
cat /etc/network/interfaces
auto bond0
iface bond0 inet static
        address  192.168.5.108
        netmask  255.255.255.0
        slaves eth1 eth2
        bond_miimon 100
        bond_mode balance-rr

And we can test the speed of this network with the iperf package:

apt-get install iperf

We start an iperf instance on one server by this command:

#cl3-amd-node2
iperf  -s -p 888

And from the other, we connect to this instance for 20 seconds:

#cl3-amd-node1
iperf -c 192.168.5.108 -p 888 -t 20
#and the conclusion
------------------------------------------------------------
Client connecting to 192.168.5.108, TCP port 888
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  3] local 192.168.5.104 port 49536 connected with 192.168.5.108 port 888
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-20.0 sec  4.39 GBytes  1.88 Gbits/sec

So we can see that I have a bonded network of two network cards and the resulting speed is almost 2Gbps.
Now we can continue with installing and setting up the DRBD resource.

apt-get install drbd-utils drbdmanage

All aspects of DRBD are controlled by its configuration file, /etc/drbd.conf. Normally, this configuration file is just a skeleton with the following contents:

include "/etc/drbd.d/global_common.conf";
include "/etc/drbd.d/*.res";

The simplest global configuration is:

cat /etc/drbd.d/global_common.conf
global {
        usage-count yes;
}
common {
        net {
        protocol C;
        }
}

And the configuration of the resource itself. It must be the same on both nodes:

root@cl3-amd-node1:/etc/drbd.d# cat /etc/drbd.d/r0.res
resource r0 {
disk {
        c-plan-ahead 15;
        c-fill-target 24M;
        c-min-rate 90M;
        c-max-rate 150M;
}
net {
        protocol C;
        allow-two-primaries yes;
        data-integrity-alg md5;
        verify-alg md5;
}
on cl3-amd-node1 {
        device /dev/drbd0;
        disk /dev/sdb1;
        address 192.168.5.104:7789;
        meta-disk internal;
}
on cl3-amd-node2 {
        device /dev/drbd0;
        disk /dev/md1p1;
        address 192.168.5.108:7789;
        meta-disk internal;
}
}
root@cl3-amd-node2:/etc/drbd.d# cat /etc/drbd.d/r0.res
resource r0 {
disk {
        c-plan-ahead 15;
        c-fill-target 24M;
        c-min-rate 90M;
        c-max-rate 150M;
}
net {
        protocol C;
        allow-two-primaries yes;
        data-integrity-alg md5;
        verify-alg md5;
}
on cl3-amd-node1 {
        device /dev/drbd0;
        disk /dev/sdb1;
        address 192.168.5.104:7789;
        meta-disk internal;
}
on cl3-amd-node2 {
        device /dev/drbd0;
        disk /dev/md1p1;
        address 192.168.5.108:7789;
        meta-disk internal;
}
}

Now we must initialize the DRBD metadata on the backing devices, on both nodes:

drbdadm create-md r0
#answer yes to destroy any possible data on the devices

Now, we can start the drbd service, on both nodes:

root@cl3-amd-node2:/etc/drbd.d# /etc/init.d/drbd start
[ ok ] Starting drbd (via systemctl): drbd.service.
root@cl3-amd-node1:/etc/drbd.d# /etc/init.d/drbd start
[ ok ] Starting drbd (via systemctl): drbd.service.

Or we can bring the resource up manually, on both nodes:

drbdadm up r0

And we can see that the resource is Inconsistent and both nodes are Secondary:

root@cl3-amd-node1:~# drbdadm status
r0 role:Secondary
  disk:Inconsistent
  cl3-amd-node2 role:Secondary
    peer-disk:Inconsistent

Start the initial full synchronization. This step must be performed on only one  node, only on initial resource configuration, and only on the node you selected as the synchronization source. To perform this step, issue this command:

root@cl3-amd-node1:# drbdadm primary --force r0

And we can see the status of our drbd storage:

root@cl3-amd-node2:~# drbdadm status
r0 role:Secondary
  disk:Inconsistent
  cl3-amd-node1 role:Primary
    replication:SyncTarget peer-disk:UpToDate done:3.10

After the synchronization successfully finishes, we promote our secondary server to primary:

root@cl3-amd-node2:~# drbdadm status
r0 role:Secondary
  disk:UpToDate
  cl3-amd-node1 role:Primary
    peer-disk:UpToDate
root@cl3-amd-node2:~# drbdadm primary r0

And we can see the status of this dual-primary (primary/primary) DRBD storage resource:

root@cl3-amd-node2:~# drbdadm status
r0 role:Primary
  disk:UpToDate
  cl3-amd-node1 role:Primary
    peer-disk:UpToDate
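
Because we set verify-alg md5 in r0.res, we can also run an online verification of the replicated data at any time. A small sketch (run it on one node only; the progress is visible in drbdadm status and in the kernel log):

drbdadm verify r0
#out-of-sync blocks are only reported, not repaired; to resynchronize
#them, disconnect and reconnect the resource on the node with bad data
drbdadm disconnect r0
drbdadm connect r0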

Now we have a new block device on both servers:

root@cl3-amd-node2:~# fdisk -l /dev/drbd0
Disk /dev/drbd0: 931.4 GiB, 1000037986304 bytes, 1953199192 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

We can configure this DRBD block device as a physical volume for LVM; the LVM sits on top of DRBD, so we can continue as if it were a physical disk. Do it only on one server. The change will be reflected on the second server, thanks to the dual-primary DRBD disk:

pvcreate /dev/drbd0
  Physical volume "/dev/drbd0" successfully created

As we can see, we must adapt /etc/lvm/lvm.conf to our needs, because LVM scans all block devices and finds duplicate entries:

root@cl3-amd-node2:~# pvs
  Found duplicate PV WXwDGteoexfmLxN6GQvt6Nd3jJxgvT2z: using /dev/drbd0 not /dev/md1p1
  Found duplicate PV WXwDGteoexfmLxN6GQvt6Nd3jJxgvT2z: using /dev/md1p1 not /dev/drbd0
  Found duplicate PV WXwDGteoexfmLxN6GQvt6Nd3jJxgvT2z: using /dev/drbd0 not /dev/md1p1
  PV         VG   Fmt  Attr PSize   PFree
  /dev/drbd0      lvm2 ---  931.36g 931.36g
  /dev/md0   pve  lvm2 a--  931.38g      0

So, we must edit the filter option in this configuration. Look at our resource configuration r0.res. We must exclude our backing devices (/dev/sdb1 on one server and /dev/md1p1 on the second), or we can reject all devices and allow only specific ones. I prefer to reject everything and allow only what we want. So edit the filter variable:

root@cl3-amd-node1:~# cat /etc/lvm/lvm.conf | grep drbd
     filter =[ "a|/dev/drbd0|", "a|/dev/sda3|", "r|.*|" ]
root@cl3-amd-node2:~# cat /etc/lvm/lvm.conf | grep drbd
    filter =[ "a|/dev/drbd0|", "a|/dev/md0|", "r|.*|" ]

Now, we don’t see duplicates and  we can create a volume group. Only on one server:

root@cl3-amd-node2:~# vgcreate drbd0-vg /dev/drbd0
  Volume group "drbd0-vg" successfully created
...
root@cl3-amd-node2:~# pvs
  PV         VG       Fmt  Attr PSize   PFree
  /dev/drbd0 drbd0-vg lvm2 a--  931.36g 931.36g
  /dev/md0   pve      lvm2 a--  931.38g      0

And finally we add the LVM volume group to Proxmox. It can be done via the web interface: go to Datacenter, click on Storage and add an LVM storage.
Then choose an ID (this is the name of your storage; it cannot be changed later, e.g. drbd0-vg), and you will see the previously created volume group drbd0-vg. Select it and enable sharing by ticking the 'shared' box.
Now we can create a virtual machine on this LVM, and thanks to DRBD we can migrate it from one server to the other without downtime, because there is one shared storage. When the migration starts, the machine is started on the other server, the content of its RAM is migrated through an SSH tunnel, and after a few seconds it is running there.
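
The migration can also be started from the command line. A small sketch, assuming a virtual machine with ID 100 (hypothetical) running on cl3-amd-node1:

#live-migrate VM 100 to the other node while it keeps running
qm migrate 100 cl3-amd-node2 --online
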
Sometimes, after certain events where the network disconnects and reconnects, a split-brain is detected. If this happens, don't panic. Both servers get marked as "StandAlone" and the DRBD storage starts to diverge: from that moment, different writes land on each side. We must pick one of the servers as the victim, because one server has the "right" data and the other has the "wrong" data. The only way out is to back up the running virtual machines on the victim, then destroy/discard its data on the DRBD storage and synchronize it from the other server, which has the "right" data. If this happens, this is what appears in the logs:

root@cl3-amd-node1:~# dmesg | grep -i brain
[499210.096185] drbd r0/0 drbd0 cl3-amd-node1: helper command: /sbin/drbdadm initial-split-brain
[499210.097306] drbd r0/0 drbd0 cl3-amd-node1: helper command: /sbin/drbdadm initial-split-brain exit code 0 (0x0)
[499210.097313] drbd r0/0 drbd0: Split-Brain detected but unresolved, dropping connection!

We must solve this problem manually. I chose cl3-amd-node1 as the victim. We must set this node as secondary:

drbdadm secondary r0

And now we must disconnect it and connect it back, marking its data to be discarded:

root@cl3-amd-node1:~# drbdadm connect --discard-my-data r0

And after the synchronization, promote it back to primary:

root@cl3-amd-node1:~# drbdadm primary r0

And in log, we can see:

cl3-amd-node1 kernel: [246882.068518] drbd r0/0 drbd0: Split-Brain detected, manually solved. Sync from peer node
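
As a side note, DRBD can also resolve some split-brain situations automatically via the after-sb-* policies in the net section of the resource. A hedged sketch of such a configuration (use it with care, especially on a dual-primary setup, because automatic discarding can throw away writes):

net {
        after-sb-0pri discard-zero-changes;
        after-sb-1pri discard-secondary;
        after-sb-2pri disconnect;
}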

Have fun.
 

How to create a software RAID 1 with mdadm with a spare

First, we must create partitions of the SAME size (in sectors) on the disks:

fdisk /dev/sdc
> n (new partition)
> p (primary type of partition)
> 1  (partition number)
> 2048 (first sector: default)
> 1953525167 (last sector: default)
> t (change partition type) - selected partition nb. 1
> fd (set it to Linux raid autodetect)
> w (write and exit)
fdisk /dev/sdd
> n (new partition)
> p (primary type of partition)
> 1  (partition number)
> 2048 (first sector: default)
> 1953525167 (last sector: default)
> t (change partition type) - selected partition nb. 1
> fd (set it to Linux raid autodetect)
> w (write and exit)
fdisk -l /dev/sdc
Disk /dev/sdc: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Device     Boot Start        End    Sectors   Size Id Type
/dev/sdc1        2048 1953525167 1953523120 931.5G fd Linux raid autodetect
root@cl3-amd-node2:~# fdisk -l /dev/sdd
Disk /dev/sdd: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Device     Boot Start        End    Sectors   Size Id Type
/dev/sdd1        2048 1953525167 1953523120 931.5G fd Linux raid autodetect

Now we can create the RAID using mdadm. The parameter --level=1 defines RAID 1.

 mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1

We can watch the progress of building the raid:

cat /proc/mdstat
md1 : active raid1 sdd1[1] sdc1[0]
      976630464 blocks super 1.2 [2/2] [UU]
      [>....................]  resync =  1.8% (17759616/976630464) finish=110.0min speed=145255K/sec
      bitmap: 8/8 pages [32KB], 65536KB chunk

Now we can add a spare disk:

fdisk /dev/sde
> n (new partition)
> p (primary type of partition)
> 1  (partition number)
> 2048 (first sector: default)
> 1953525167 (last sector: default)
> t (change partition type) - selected partition nb. 1
> fd (set it to Linux raid autodetect)
> w (write and exit)
mdadm --add-spare /dev/md1 /dev/sde1

And now we can see detail of the raid:

mdadm --detail /dev/md1
/dev/md1:
        Version : 1.2
  Creation Time : Tue Mar 14 11:56:28 2017
     Raid Level : raid1
     Array Size : 976630464 (931.39 GiB 1000.07 GB)
  Used Dev Size : 976630464 (931.39 GiB 1000.07 GB)
   Raid Devices : 2
  Total Devices : 3
    Persistence : Superblock is persistent
  Intent Bitmap : Internal
    Update Time : Tue Mar 14 12:00:49 2017
          State : clean, resyncing
 Active Devices : 2
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 1
  Resync Status : 3% complete
           Name : cl3-amd-node2:1  (local to host cl3-amd-node2)
           UUID : 919632d4:74908819:4f43bba3:33b89328
         Events : 52
    Number   Major   Minor   RaidDevice State
       0       8       33        0      active sync   /dev/sdc1
       1       8       49        1      active sync   /dev/sdd1
       2       8       65        -      spare   /dev/sde1

And we can see it here too:

cat /proc/mdstat
md1 : active raid1 sde1[2](S) sdd1[1] sdc1[0]
      976630464 blocks super 1.2 [2/2] [UU]
      [=>...................]  resync =  7.5% (73929920/976630464) finish=103.3min speed=145533K/sec
      bitmap: 8/8 pages [32KB], 65536KB chunk
unused devices: <none>

After a reboot, if we cannot see our md1 device, like this:

root@cl3-amd-node2:/etc/drbd.d# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sda1[0] sde1[2](S) sdb1[1]
      976629760 blocks super 1.2 [2/2] [UU]
      bitmap: 1/8 pages [4KB], 65536KB chunk
unused devices: <none>

We can assemble it again with this command, without a resync:

mdadm --assemble /dev/md1 /dev/sdc1 /dev/sdd1
mdadm: /dev/md1 has been started with 2 drives.
root@cl3-amd-node2:/etc/drbd.d# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sdc1[0] sdd1[1]
      976630464 blocks super 1.2 [2/2] [UU]
      bitmap: 0/8 pages [0KB], 65536KB chunk
md0 : active raid1 sda1[0] sde1[2](S) sdb1[1]
      976629760 blocks super 1.2 [2/2] [UU]
      bitmap: 1/8 pages [4KB], 65536KB chunk
unused devices: <none>

If we want this RAID to start automatically at boot, we must add the array to mdadm.conf. First, we scan for our arrays and then append the missing one to /etc/mdadm/mdadm.conf:

root@cl3-amd-node2:/etc/drbd.d# mdadm --examine --scan
...
ARRAY /dev/md/1  metadata=1.2 UUID=94e2df50:43dbed78:b3075927:401a9b65 name=cl3-amd-node2:1
ARRAY /dev/md/0  metadata=1.2 UUID=2c29b20a:0f2d8abf:c2c9e150:070adaba name=cl3-amd-node2:0
   spares=1
cat /etc/mdadm/mdadm.conf
...
# definitions of existing MD arrays
ARRAY /dev/md/0  metadata=1.2 UUID=2c29b20a:0f2d8abf:c2c9e150:070adaba name=cl3-amd-node2:0
   spares=1
echo "ARRAY /dev/md/1  metadata=1.2 UUID=94e2df50:43dbed78:b3075927:401a9b65 name=cl3-amd-node2:1" >> /etc/mdadm/mdadm.conf

And the last step is to update the initramfs, so the mdadm.conf inside it is refreshed:

update-initramfs -u
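
As a side note, mdadm can also watch the arrays and send mail alerts when a disk fails. A small sketch, assuming a working local MTA:

#add a mail address for alerts to the mdadm configuration
echo "MAILADDR root" >> /etc/mdadm/mdadm.conf
#or run the monitor manually as a daemon
mdadm --monitor --scan --daemonise --mail root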

If there is a need to replace a bad or missing disk, we must create a partition on the new disk with the same size.
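
One convenient way to get an identical layout is to clone the partition table from a healthy member disk with sfdisk (a small sketch; here assuming /dev/sdd is a healthy member and /dev/sdb is the new, empty disk, so double-check the device names before running it):

#dump the partition table of the healthy disk and write it to the new one
sfdisk -d /dev/sdd | sfdisk /dev/sdb

The result should look like this: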

fdisk -l /dev/sdb
Disk /dev/sdb: 233.8 GiB, 251000193024 bytes, 490234752 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
Device     Boot Start       End   Sectors   Size Id Type
/dev/sdb1        2048 488397167 488395120 232.9G fd Linux raid autodetect

Degraded array:

mdadm --detail /dev/md1
/dev/md1:
        Version : 1.2
  Creation Time : Fri May 27 09:08:25 2016
     Raid Level : raid5
     Array Size : 488132608 (465.52 GiB 499.85 GB)
  Used Dev Size : 244066304 (232.76 GiB 249.92 GB)
   Raid Devices : 3
  Total Devices : 2
    Persistence : Superblock is persistent
    Update Time : Thu Apr 20 11:33:11 2017
          State : clean, degraded
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0
         Layout : left-symmetric
     Chunk Size : 512K
           Name : cl2-sm-node3:1  (local to host cl2-sm-node3)
           UUID : 827b1c8a:5a1a1e7c:1bb5624f:9aa491b1
         Events : 692
    Number   Major   Minor   RaidDevice State
       0       0        0        0      removed
       1       8       65        1      active sync   /dev/sde1
       3       8       49        2      active sync   /dev/sdd1

Now we can add the new disk to this array:

mdadm --manage /dev/md1 --add /dev/sdb1
   mdadm: added /dev/sdb1

And it's done:

cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md1 : active raid5 sdb1[4] sde1[1] sdd1[3]
      488132608 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [_UU]
      [>....................]  recovery =  0.3% (869184/244066304) finish=197.5min speed=20515K/sec
      bitmap: 0/2 pages [0KB], 65536KB chunk

If we have a problem with some disk, we can remove it while the server is running. First, we must mark it as failed. So look at a good and working RAID 1:

mdadm --detail /dev/md0
/dev/md0:
 Raid Level : raid1
 Array Size : 976629760 (931.39 GiB 1000.07 GB)
 Used Dev Size : 976629760 (931.39 GiB 1000.07 GB)
 Raid Devices : 2
 Total Devices : 3
 State : clean
 Active Devices : 2
 Working Devices : 3
 Failed Devices : 0
 Spare Devices : 1
active sync /dev/sda1
active sync /dev/sdb1
spare /dev/sde1

Now mark disk sda1 as faulty:

mdadm /dev/md0 -f /dev/sda1
mdadm --detail /dev/md0
/dev/md0:
 Array Size : 976629760 (931.39 GiB 1000.07 GB)
 Used Dev Size : 976629760 (931.39 GiB 1000.07 GB)
 Raid Devices : 2
 Total Devices : 3
 Persistence : Superblock is persistent
 State : clean, degraded, recovering
 Active Devices : 1
Working Devices : 2
 Failed Devices : 1
 Spare Devices : 1
Rebuild Status : 0% complete
spare rebuilding /dev/sde1
active sync /dev/sdb1
faulty /dev/sda1
cat /proc/mdstat
md0 : active raid1 sda1[0](F) sde1[2] sdb1[1]
 976629760 blocks super 1.2 [2/1] [_U]
 [>....................] recovery = 0.2% (2292928/976629760) finish=169.9min speed=95538K/sec

I waited until this operation finished. Then I halted the server, removed the failing drive and inserted a new one. After power-on, we create a partition table on /dev/sda exactly like the old one (or like the one on the currently active disks). Then we re-add it as a spare to the RAID:

 mdadm /dev/md0 -a /dev/sda1
mdadm --detail /dev/md0
/dev/md0:
 Raid Level : raid1
 Array Size : 976629760 (931.39 GiB 1000.07 GB)
 Used Dev Size : 976629760 (931.39 GiB 1000.07 GB)
 Raid Devices : 2
 Total Devices : 3
 Active Devices : 2
Working Devices : 3
 Failed Devices : 0
 Spare Devices : 1
active sync /dev/sde1
active sync /dev/sdb1
spare /dev/sda1
cat /proc/mdstat
md0 : active raid1 sda1[3](S) sde1[2] sdb1[1]
 976629760 blocks super 1.2 [2/2] [UU]
 bitmap: 1/8 pages [4KB], 65536KB chunk

Setting up logrotate on Centos 7

Yesterday, I ran into the problem of a low-capacity /var/log/ partition. Some logs were too big, and logrotate is the perfect tool to handle this problem. It is software designed to reduce the amount of space taken up by every log file we have. And it can do this in several ways.
Logrotate description: logrotate is designed to ease administration of systems that generate large numbers of log files. It allows automatic rotation, compression, removal, and mailing of log files. Each log file may be handled daily, weekly, monthly, or when it grows too large.
Normally, logrotate is run as a daily cron job. It will not modify a log multiple times in one day. So, in a few words, logrotate reduces the disk space used by log files.

Logrotate configuration

Logrotate is configured in one main file, /etc/logrotate.conf, and in service-specific configuration files stored in /etc/logrotate.d/.
So the main sample configuration is:

# see "man logrotate" for details
# rotate log files weekly specified in /etc/logrotate.d/
weekly
# keep 4 weeks of all log files
rotate 4
# create new (empty) log files after rotating old ones
create
# use date as a suffix of the rotated file
dateext
# uncomment this if you want your log files compressed by gzip
compress
# RPM packages drop log rotation information into this directory
#this directory holds the configurations of all other services and their logs to rotate
include /etc/logrotate.d

Some samples and real log files configurations

So, we can add new log files into /var/log/ this way:

echo "this is a sample log file" > /var/log/vasil.log
#this creates a 5MB log file vasil1.log
dd if=/dev/zero of=/var/log/vasil1.log bs=1M count=5

Next, we create the new configuration files, stored in the destination explained above:

vim /etc/logrotate.d/vasil
###
/var/log/vasil.log {
 missingok
 notifempty
 compress
 minsize 1M
 daily
 create 0600 root root
}
vim /etc/logrotate.d/vasil1
###
/var/log/vasil1.log {
 missingok
 notifempty
 compress
 minsize 1M
 daily
 create 0600 root root
}

And some explanation of the directives:

  • missingok – do not raise an error if the log file is missing
  • notifempty – do not rotate the log file if it is empty
  • compress – old versions of log files are compressed with gzip
  • minsize 1M – the log file is rotated only if it is bigger than 1 MB
  • daily – ensures daily rotation
  • create – creates a new log file with permissions 0600, owned by user root and group root

If you want more options and their explanation, look into manual:

man logrotate

Look at the listing of /var/log for our log files. We can see that we have one log vasil.log with a size of 26 bytes and vasil1.log with a size of 5MB.

ls -lah /var/log/va*
-rw-r--r--. 1 root root 5.0M Mar  3 13:21 /var/log/vasil1.log
-rw-r--r--. 1 root root   26 Mar  3 13:21 /var/log/vasil.log

Now, we can debug our configuration via this command:

logrotate -d /etc/logrotate.d/vasil1
or
logrotate -d /etc/logrotate.d/vasil
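
The -d flag only simulates the rotation and changes nothing on disk. To really force a rotation of just these test logs, we can point logrotate at a specific configuration file instead of the main one (a small sketch; note that the global options from /etc/logrotate.conf are not applied in this case):

logrotate -fv /etc/logrotate.d/vasil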

So, if we want to run logrotate manually and see what happens, run the following command. But be careful, because it rotates all your logs defined in /etc/logrotate.d/:

logrotate -f /etc/logrotate.conf

And we can see both log files compressed and two new empty log files created:

 ls -lah /var/log/va*
-rw-------. 1 root root    0 Mar  3 13:23 /var/log/vasil1.log
-rw-r--r--. 1 root root 5.0K Mar  3 13:21 /var/log/vasil1.log-20170303.gz
-rw-------. 1 root root    0 Mar  3 13:23 /var/log/vasil.log
-rw-r--r--. 1 root root   44 Mar  3 13:21 /var/log/vasil.log-20170303.gz

We can look into our compressed log file by this command:

zcat /var/log/vasil.log-20170303.gz
this is a sample log file

Or we can uncompress them with the gunzip command.
When we use logrotate, sometimes we need to restart an application or service afterwards. Logrotate can do that with a script in a "postrotate" block. Such a script can be used in a configuration file like the one for httpd: when the logs are rotated, the script reloads the service so it uses the new empty log file.

cat /etc/logrotate.d/httpd
/var/log/httpd/*log {
    missingok
    notifempty
    sharedscripts
    delaycompress
    postrotate
        /bin/systemctl reload httpd.service > /dev/null 2>/dev/null || true
    endscript
}

So I hope that this how-to helps somebody 🙂 Have fun.

How to install a samba server on Centos 7 with and without user and password

First, we must install the samba package and accept all dependencies.

yum install samba -y

Create a user who can access our secure samba folder:

useradd -s /sbin/nologin user
groupadd smbgroup
usermod -a -G smbgroup user
smbpasswd -a user

Then create the directories for the samba shares. The chcon command marks our directories with a label so that SELinux allows the samba service to operate on these folders. Another possibility is to disable SELinux, but that is not the right way 🙂

#for anonymous
mkdir -p /mnt/aaa
chmod -R 0777 /mnt/aaa
chcon -t samba_share_t /mnt/aaa -R
chown -R nobody:nobody /mnt/aaa
#for another secure user
mkdir -p /mnt/nfs/kadeco/
chmod -R 0755 /mnt/nfs/kadeco/
chcon -t samba_share_t /mnt/nfs/kadeco/ -R
chown -R user:smbgroup /mnt/nfs/kadeco/

Edit the samba config for our anonymous and secure shares:

vi /etc/samba/smb.conf
[global]
 workgroup = home
 security = user
 passdb backend = tdbsam
 printing = cups
 printcap name = cups
 load printers = yes
 cups options = raw
 map to guest = bad user
[Anonymous-aaa]
        path = /mnt/aaa
        writable = yes
        browsable = yes
        guest ok = yes
        create mode = 0777
        directory mode = 0777
[kadeco]
        path = /mnt/nfs/kadeco
        writable = yes
        browsable = yes
        guest ok = no
        valid users = user
        create mask = 0755
        directory mask = 0755
        read only = No

Now we can view our samba configuration and test it for errors with this command:

testparm

Next, if we use a firewall, we must allow some ports, or the samba service. Note that the NetBIOS name and datagram services (ports 137 and 138) use UDP:

firewall-cmd --permanent --zone=public --add-port=137/udp
firewall-cmd --permanent --zone=public --add-port=138/udp
firewall-cmd --permanent --zone=public --add-port=139/tcp
firewall-cmd --permanent --zone=public --add-port=445/tcp
firewall-cmd --permanent --zone=public --add-port=901/tcp
firewall-cmd --reload
or we can simply use:
firewall-cmd --permanent --zone=public --add-service=samba
firewall-cmd --reload

And finally, start the samba services and enable them, so they start after a reboot.

systemctl start smb.service
systemctl start nmb.service
systemctl enable smb.service
systemctl enable nmb.service

A way to restart samba services:

systemctl restart smb
systemctl restart nmb

And now we can use our samba server: the anonymous folder, or the secured folder 🙂
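
To test it from a Linux client, we can list the shares and mount the secure one. A small sketch, assuming the server has the IP 192.168.1.50 (hypothetical) and the samba-client and cifs-utils packages are installed on the client:

#list the shares offered by the server (asks for the smb password of "user")
smbclient -L //192.168.1.50 -U user
#mount the secure share
mkdir -p /mnt/smb-kadeco
mount -t cifs //192.168.1.50/kadeco /mnt/smb-kadeco -o username=user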

If you want apache to be able to read some shared folder, just make an SELinux modification:

Allow samba read/write access everywhere:

setsebool -P samba_export_all_rw 1

Or, if you want to be a little more discreet about it:

1) chcon -t public_content_rw_t /mnt/nfs/kadeco
2) setsebool -P allow_smbd_anon_write 1
3) setsebool -P allow_httpd_anon_write 1

This should allow both Samba and Apache write access to the public_content_rw_t context.
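
One caveat: labels set with chcon do not survive a filesystem relabel. To make the label persistent, we can record it in the SELinux policy with semanage and then apply it (a small sketch; semanage comes from the policycoreutils-python package on Centos 7):

#record the label in the local SELinux policy
semanage fcontext -a -t public_content_rw_t "/mnt/nfs/kadeco(/.*)?"
#apply it to the existing files
restorecon -Rv /mnt/nfs/kadeco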

We can list the status of samba with these commands:

smbstatus -p
- show list of samba processes
smbstatus -S
- show samba shares
smbstatus -L
- show samba locks

If we need to restart the samba process, or restart the server, we can list the locked files with "smbstatus -L". We can see which share is locked and which specific file is being accessed.

Have fun.