I needed to migrate the software below from Ubuntu 14 to 16:
- Just run
- in order to keep the configuration, you will need to copy the files in the `/etc/apache2` dir to the destination server
- remove all traces of php5 from the system
- use `ls sites-enabled` to know which configuration profiles, modules and virtual domains are enabled, so you can replicate the configuration on the destination server.
- After copying all the virtual domains / hosts configuration, if you don’t want to mess with the DNS, the following command is useful to check that the virtual hosts are working properly:
curl http://<ip.of.your.new.server> -H "Host: yourvirtualdomain.tld"
There is a nice Chrome extension called “Virtual Hosts” that allows you to test virtual domains while you don’t want to tamper with your DNS.
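If your curl is recent enough (7.21.3 or later), an alternative to overriding the Host header is to pin the hostname to the new server’s IP with `--resolve`, which exercises name-based matching the same way a browser would (same placeholder values as above):

```
curl --resolve yourvirtualdomain.tld:80:<ip.of.your.new.server> http://yourvirtualdomain.tld/
```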
Bye Bye MySQL, Hello MariaDB!
After experimenting with MySQL 5.7 on Ubuntu 16, I started to have problems with the MySQL server crashing randomly without any apparent cause. I tried to reinstall version 5.5, which is the one running on Ubuntu 14, but that particular version is no longer available for 16. So the option was to give up MySQL for a better alternative: MariaDB, a fork of MySQL by its original author. The name is different, but everything else, like the configuration files and binaries, is the same, to allow for an easy migration. First I tried the latest stable version, 10.2, but then I discovered I could not reuse the existing MySQL storage files in MariaDB; I had to export from MySQL and then reimport and recreate the databases and users on MariaDB. phpMyAdmin works well with MariaDB, so the transition was not that difficult.
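The export/reimport step can also be done from the command line instead of phpMyAdmin. A minimal sketch, where `mydb`, `myuser` and the password are placeholder names to adapt, not my actual setup:

```
# On the old MySQL server: dump one database to a file
# (--all-databases also works, but drags the system tables along).
mysqldump -u root -p mydb > mydb.sql

# On the new MariaDB server: recreate the database and user, then import.
mysql -u root -p -e "CREATE DATABASE mydb"
mysql -u root -p -e "GRANT ALL PRIVILEGES ON mydb.* TO 'myuser'@'localhost' IDENTIFIED BY 'secret'"
mysql -u root -p mydb < mydb.sql
```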
UPDATE NOTE: don’t use the MariaDB that comes from the Ubuntu repository; use the one provided by MariaDB’s own repository. The Ubuntu package does not prompt you for a root password during installation, and you’ll run into trouble not knowing which password the automatic installation created. In my case I added the line:
deb http://ftp.osuosl.org/pub/mariadb/repo/10.1/ubuntu xenial main
to `/etc/apt/sources.list`. This particular mirror is hosted by the Oregon State University Open Source Lab; you can pick your own repository mirror here.
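With that line in place, installing from the MariaDB repository is the usual apt sequence (a sketch; the package name is the standard one from that repository):

```
sudo apt-get update
sudo apt-get install mariadb-server   # this build does prompt for the root password
```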
phpMyAdmin does a good job of exporting all the database contents into one whole file, but it’s not a good idea to export the system tables, the ones that come by default with the database engine when it is installed. To copy users besides root between two MySQL servers you should extract the grant tables such as `tables_priv`. This copies the user names and passwords but not the permissions; for those, use the “Export” option under “Users” in phpMyAdmin.
Between Ubuntu versions 14 and 16 there was a major release of Postfix, from version 2.11 to 3.1. I’m using virtual domains, which are stored inside a MySQL table; to decide which domains it is authorized to receive mail for, Postfix queries this table, named `domains`, to know if the domain is there. My schema was based on the one described here, which uses just four tables with the essentials; I’m not using the one recommended on the Ubuntu site.
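For reference, a Postfix MySQL lookup map for such a `domains` table can look like the sketch below; the file name, credentials and column name are hypothetical examples, not my exact configuration:

```
# /etc/postfix/mysql-virtual-domains.cf (hypothetical example)
user = mailuser
password = secret
hosts = 127.0.0.1
dbname = mail
query = SELECT name FROM domains WHERE name = '%s'
```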
Both Postfix and Courier are dependent on the SASL daemon for authentication, so you’ll have to configure this daemon.
Follow these steps:
mkdir -p /var/spool/postfix/var/run/saslauthd
Set `START=yes` on the first line and
- Change the daemon options line from
OPTIONS="-c -m /var/spool/postfix/var/run/saslauthd -r -O localhost"
to
PARAMS="-m /var/spool/postfix/var/run/saslauthd -r"
pwcheck_method: saslauthd
mech_list: plain login
allow_plaintext: true
- Add the user `postfix` to the group `sasl`:
adduser postfix sasl
- delete the default `/var/run/saslauthd` dir and create a symlink to the folder inside the postfix chroot:
ln -s /var/spool/postfix/var/run/saslauthd /var/run/saslauthd
NOTE on this step: it can happen that on every restart the folder `/var/run/saslauthd` is recreated automatically. The solution is to create a script that deletes the folder on every reboot and recreates the symlink:
SASL_DIR='/var/run/saslauthd'
[ -d "$SASL_DIR" ] && rm -r "$SASL_DIR"
ln -s /var/spool/postfix"$SASL_DIR" "$SASL_DIR"
chgrp sasl "$SASL_DIR"
Put this script anywhere and call it through the default script
- Restart postfix and saslauthd:
service postfix restart
service saslauthd restart
NOTE: even after this I had issues with authentication (the message `imapd: authdaemon: s_connect() failed: No such file or directory` appeared in the logs), and I found that the service `courier-authdaemon` was not enabled:
service courier-authdaemon start
To ensure that this service is always loaded by default on every boot, execute `sudo systemctl enable courier-authdaemon`.
It is possible to test the authentication via the command `authtest firstname.lastname@example.org password`.
For saslauthd I decided to give up on using `pam`, which would then have used MySQL authentication for the SMTP authentication, because I couldn’t succeed in using the pam_mysql driver. I replaced it with `rimap` (step 3 above), much more easily; now Postfix delegates the authentication to the Courier IMAP server.
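For the record, the rimap mechanism is selected in `/etc/default/saslauthd`; a sketch of the two relevant lines, assuming Courier IMAP is listening on localhost:

```
MECHANISMS="rimap"
MECH_OPTIONS="localhost"   # the IMAP server saslauthd authenticates against
```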
This one is almost a requirement nowadays. Chances are that if you don’t use DKIM, the email server of your destination will reject the email coming from your domain or mark it as SPAM. What happens is that when Postfix receives a new mail to deliver remotely, it invokes the OpenDKIM milter to sign the message. I was running into problems since OpenDKIM in its latest version uses UNIX sockets and not TCP ones. Thanks to the answer found here.
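DKIM only works once the matching public key is published in DNS. A hypothetical record, where the selector `mail` and the truncated key value are placeholders:

```
mail._domainkey.yourvirtualdomain.tld. IN TXT "v=DKIM1; k=rsa; p=MIGfMA0GCSq..."
```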
I had to change the following lines in `/etc/postfix/main.cf` from:
###DKIM
milter_protocol = 2
milter_default_action = accept
smtpd_milters = inet:localhost:12301
non_smtpd_milters = inet:localhost:12301
to:
###DKIM
milter_protocol = 6
milter_default_action = accept
smtpd_milters = unix:/var/run/opendkim/opendkim.sock
non_smtpd_milters = unix:/var/run/opendkim/opendkim.sock
I also had to change `Socket inet:12301@localhost` in the OpenDKIM configuration, to reflect the change of protocol between versions.
Important note: do not forget to add the postfix user to the `opendkim` group:
sudo adduser postfix opendkim
Deal with procmail
To use the Maildir format, which Postfix uses by default, and to deliver the mail intended for local UNIX accounts, you’ll need to add these lines to the `.procmailrc` file in each home dir:
MAILDIR=$HOME/Maildir/            # you'd better make sure it exists
DEFAULT=$HOME/Maildir/            # completely optional
LOGFILE=$HOME/logs/procmail.log   # recommended
Then install the `heirloom-mailx` package on Ubuntu:
sudo apt install heirloom-mailx
Then add the following line to a file inside
Reissue certificates for SMTPD
In case of a change in the name of the server, you will need to regenerate the certificates:
cd /etc/postfix
openssl req -new -outform PEM -out smtpd.cert -newkey rsa:2048 -nodes -keyout smtpd.key -keyform PEM -days 365 -x509
chmod o= /etc/postfix/smtpd.key
and answer the question “Common Name (eg, YOUR name)” with the FQDN.
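To skip the interactive questions altogether, the subject can be supplied on the command line and the result inspected afterwards; a sketch, with `mail.example.org` standing in for your FQDN:

```shell
# Generate key + self-signed cert non-interactively (-subj answers the prompts).
openssl req -new -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=mail.example.org" \
  -keyout smtpd.key -out smtpd.cert

# Check the Common Name and validity dates of what was produced.
openssl x509 -in smtpd.cert -noout -subject -dates
```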
Other issues with Postfix
Some problems can occur with the config files inside the `/etc/postfix` dir, as revealed in the mail logs. Some can be fixed by:
chown root:postdrop dynamicmaps.cf
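Postfix also ships a built-in sanity check that reports wrong ownership and permissions under its directories, which is a quicker way to surface this kind of problem than waiting for it to show up in the logs:

```
sudo postfix check             # report permission/ownership problems
sudo postfix set-permissions   # optionally reset them to the defaults
```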
Courier is the mail delivery server. It uses POP or IMAP to communicate with an offline email client like Thunderbird. It is needed only if there is no webmail site for your domain.
If Courier is already configured to use MySQL as the authentication endpoint, delete the certificates (`rm *.pem`) inside the `/etc/courier` dir and change the files such as `pop3d.cnf` to reflect the new FQDN of the server.[^2]
Then recreate the certificates:
-> I had to change these two scripts located in `/usr/lib/courier/`, because `openssl` discontinued the option `gendh` in favour of
And after that I restarted the five
BIND / NAMED
The hosts files are stored in `/var/lib/bind`. The remaining files of the global configuration are in
Just make a backup of only Webmin’s own configuration: go to Webmin > Backup Configuration Files > Backup now, uncheck “Server Configuration Files”, check “Webmin configuration files” and check “Download in browser”. The downloaded .tgz file can then be used on the destination Webmin server by uploading it in “Restore Now”, clicking “Restore” and choosing “yes” in “Apply configurations?”.
If using SQLite as the database engine for Roundcube, don’t forget to copy the database file. In my case it was residing in `/var/lib/roundcube`, file roundcube.sqlite3.
If you are using a VPS that does not have a fixed IP address, and your DNS registrar provides the option of dynamically assigning a new IP to a name (personally I use Namecheap), there are no changes to be made.
rsync is a great and useful command, but first you’ll have to configure SSH to allow login using keys, disabling the need for a password every time you use the command (rsync uses `ssh` to log in on the remote server).
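Setting up the key-based login is essentially two commands (a sketch; the user and host names are placeholders):

```
ssh-keygen -t rsa                    # accept the defaults, leave the passphrase empty
ssh-copy-id user@new.server.tld      # installs the public key on the destination
```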
To copy entire directories and their whole tree, it is perhaps better to compress those directories into an archive. Personally I prefer to use the bzip2 format, which allows for greater compression than gzip:
tar cjvf <directory>.tar.bz2 <directory>
tar xjvf <directory>.tar.bz2
It is perhaps useful to exclude some files, like compressed files, by specifying the file patterns to be excluded in an external file and passing the `--exclude-from` option to tar:
tar --exclude-from='exclude-patterns.txt' cjvf <directory>.tar.bz2 <directory>
exclude-patterns.txt can have the following contents:
*.zip
*.tar.*
*.7z
.git/*
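A quick way to sanity-check the patterns is to build a throwaway tree and list what actually lands in the archive (everything below is disposable example data):

```shell
# Create a small tree with files that should and should not be archived.
mkdir -p demo/.git
echo notes   > demo/notes.txt
echo zipped  > demo/archive.zip
echo gitdata > demo/.git/objects

# The same exclude patterns, written to the external file.
printf '%s\n' '*.zip' '*.tar.*' '*.7z' '.git/*' > exclude-patterns.txt

tar --exclude-from='exclude-patterns.txt' -cjf demo.tar.bz2 demo

# List the archive: notes.txt is kept; archive.zip and .git contents are not.
tar -tjf demo.tar.bz2
```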
NOTE: this file was converted from Markdown to HTML with the help of Pandoc.