
Reasons why you should use Linux

Since its initial release in 1991, Linux has made tremendous leaps and bounds, targeting not only corporate entities but desktop and home users as well. If you are a Windows or Mac user, chances are that you have interacted with Linux at one point or another without knowing it. This is because, unlike a decade ago, Linux now underpins most modern technologies. Smart devices such as smartphones, smart TVs and tablets run Android, which is based on a modified version of the Linux kernel. Most modern websites are hosted on Linux servers running either the Apache or Nginx web server. Linux has also found its way into Internet of Things (IoT) devices, which are widely used in research and development.

If you are not yet convinced on why you should embrace Linux in the ever-changing technological space, then here are more reasons why you should make the switch to Linux.

1) Linux is free and opensource

As you know, purchasing a Windows desktop license can be pricey, let alone the server version. macOS comes pre-packaged with Apple hardware, leaving you no choice but to purchase the entire machine.

In sharp contrast, Linux is available for download at absolutely no cost! Save for a few enterprise distributions such as RHEL (Red Hat Enterprise Linux), most Linux distributions are free to download and install. Additionally, most distributions ship with out-of-the-box applications for everyday use: the Firefox browser, audio and video players such as VLC and MPlayer, office productivity suites like LibreOffice, calendar and calculator apps, and much more. Most distributions have also upped their game to provide stunning and elegant desktop environments such as GNOME, KDE Plasma and Deepin, alongside polished icon sets for desktop users and beginners.

One of the most crucial and striking aspects of Linux is the availability of its source code, unlike proprietary systems such as Windows and macOS. Users can not only view the code but also modify and redistribute it. Over time, this has given rise to multiple Linux flavors and a huge community of opensource developers.

2) Linux is stable and secure

Linux is renowned as a fast and stable operating system compared to its Windows counterpart. A Linux server can stay up for long periods without its performance degrading, unlike Windows, which needs periodic reboots to keep performing at optimal levels. For this reason, Linux is the system of choice for most cloud servers, including web, database and application servers.

Compared to Windows, Linux is considered more secure and offers a heightened degree of privacy and confidentiality. In fact, some Linux distributions, such as Tails and Discreete Linux, were developed with privacy in mind.

3) Availability of tons of software applications

I have used Linux for nearly 6 years, and what has made my experience worthwhile and exciting is the availability of tons of packages from various repositories. The Ubuntu repository alone provides over 50,000 software packages for download! More recently, Canonical introduced snap packages, self-contained packages that ship with their own libraries and dependencies and further simplify software installation. Additionally, you can browse an array of applications in the Software Center that comes with almost every distro and install your preferred application.

4) Unlimited Customization

One of the most exciting things about Linux is the ability to customize its look and feel in every way you can fathom. You can install as many desktop environments as you wish and savour the appeal that each has to offer. Additionally, you can tweak window managers, icons and notification bars, and select your preferred theme or wallpaper. Each distribution is unique in its own way, offering a wide array of customization options so you end up with the UI of your choice.

5) Jump-starting an old PC

If you have an old PC sitting somewhere gathering dust and you are wondering what to do with it, don’t dispose of it yet. The Linux community has made available some lightweight flavors that are ideal for old machines with low system specifications. With only 1 GB of RAM and 5 GB of hard disk space, you can readily install any of these lightweight Linux distros: Linux Lite, LXLE, antiX, Tiny Core, MX Linux and Sparky Linux, to mention a few.

6) Stellar Community Support

Having been built around an opensource project, Linux boasts a wide community of vibrant developers and enthusiasts who are ready to provide assistance if you get stuck. More importantly, there are tons of online forums you can visit to pick up Linux tips and tricks.

Conclusion

As technology evolves with every passing minute, knowledge of the Linux operating system is becoming ever more valuable. In fact, now more than ever, there is huge demand for professionals with Linux skills. If you are in the IT industry, we cannot emphasize enough how important competency in Linux is. Linux will unlock doors to advanced technologies such as Docker and Ansible. We hope that we have convinced you why you should use Linux and shown what the future holds for Linux and for users with Linux expertise.

How to delete a directory in Linux

Once in a while, you might want to remove a directory in Linux to free up space or simply get rid of unwanted files or directories. Deleting a directory in Linux is quite easy and straightforward, and the steps should work on all Linux distributions, such as Ubuntu and Fedora. This guide takes the following approaches:

  1. Deleting an empty directory
  2. Deleting a directory with content (files and subdirectories)

In this tutorial, you will learn how to delete or remove a directory in Linux. For this guide, I’ll be using an instance of an Ubuntu virtual machine. Without further ado, let’s roll up our sleeves and dig in.

How to remove an empty directory in Linux

To remove an empty directory, use the rmdir command, short for remove directory. The rmdir command works if and only if the directory is empty.

The syntax for deleting an empty directory is as follows:

$ rmdir directory_name

For example, to remove an empty directory, say, mydirectory, execute the command:

$ rmdir mydirectory

To demonstrate this, we are first going to create a directory in Linux and later delete it. To create a directory, use the mkdir command as shown:

$ mkdir mydirectory
how to create a directory in linux

To remove the directory, execute:

$ rmdir mydirectory
how to remove a directory in Linux

NOTE:

If the directory is not owned or created by the current user, prefix the command with sudo. This grants sufficient privileges to the user deleting the directory; otherwise you will get a permission error. Also, ensure that the user has been added to the sudoers group in order to carry out elevated tasks.

Therefore, if you are deleting an empty directory owned by root or another user, use the following syntax:

$ sudo rmdir directory_name
remove a directory in linux using sudo command

What happens if you try to remove a non-empty directory? In that case, you will get an error splashed on the screen as shown:

rmdir: failed to remove 'mydirectory': Directory not empty
error while removing a directory in Linux

 

How to remove a directory with files and folders in Linux

As you saw previously, removing a non-empty directory using the rmdir command fails and, instead, prints out an error on the terminal.

To resolve this issue, use the rm command. Short for remove, the rm command removes not only ordinary files but also both empty and non-empty directories. The command is often used in conjunction with options when deleting directories. The syntax for using the rm command is as follows:

$ rm [options] directory_name

If you want to delete a directory, use the -r or -R option to delete it recursively. This deletes the entire directory along with its files and sub-directories.

$ rm -r directory_name
remove a directory recursively
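To see the difference between rmdir and rm -r in practice, here is a quick sandbox run (the directory name demo_project is made up for this illustration):

```shell
# Create a directory containing a file and a sub-directory
mkdir -p demo_project/assets
echo "hello" > demo_project/readme.txt

# rmdir refuses to remove it because it is not empty
rmdir demo_project 2>/dev/null || echo "rmdir failed as expected"

# rm -r removes the directory together with everything inside it
rm -r demo_project
[ ! -d demo_project ] && echo "demo_project removed"
```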

Again, use the sudo command if the user you are logged in as does not own the directory.

$ sudo rm -r directory_name

In some scenarios, you might stumble upon a write-protected directory that keeps prompting you about whether you would like to delete the files therein and the directory itself. To avoid such prompts, which can be somewhat annoying and disruptive, append the -f option to force deletion of all the files and directories without being prompted.

$ rm -rf directory_name

Also, you can remove or delete several directories simultaneously in one command as shown:

$ rm -rf directory1 directory2 directory3

Be advised that the command deletes all the directories recursively, i.e. alongside their associated files and sub-directories. If you want to delete a directory recursively with the -f option, you need to be absolutely sure about the directory’s contents.

Alternatively, you can use the -i option, which gives you an interactive session and asks whether you would like to proceed with deleting each item inside the directory. This comes in handy especially when you are unsure of some of the directory’s contents.

$ rm -ri directory_name
prompt when deleting a directory

The downside to using the -i option is that it will keep prompting you to delete each file inside the directory. Imagine a scenario where you have a hundred or a thousand files you need to delete. Your guess is as good as mine how cumbersome that would be.

To remedy this problem, use the -I option, which prompts you only once before deleting more than three files or when deleting recursively.

$ rm -rI directory_name
I option when deleting a directory

As mentioned earlier, the rm command can also be used to remove an empty directory. To accomplish this, simply use the -d option as shown:

$ sudo rm -d empty_directory
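For example, assuming an empty directory named empty_demo (a throwaway name for this illustration):

```shell
mkdir empty_demo

# -d removes the directory only because it is empty
rm -d empty_demo
[ ! -d empty_demo ] && echo "removed"
```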

Delete directories using the find command

Traditionally, the find command is used to search for and locate files and directories based on given criteria. In addition, the find command can be used to find and delete files and directories that you want to get rid of.

For example, to delete a directory called directory, run the command:

$ sudo find . -type d -name "directory" -exec rm -rf {} +

Let’s look closer at some of the options in this command:

Period sign (.) – This indicates that the search operation takes place in the current working directory.

-type d – This option specifies that the search operation should return directories only and not files.

-name – This specifies the directory name.

-exec rm -rf – This forces the deletion of the directory alongside its subdirectories and files.

{} + – This appends all the directories found to the end of the rm command so they are deleted in one batch.
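Here is the command at work in a small sandbox (the tree and the directory name cache are invented for the demo). Note that find finishes collecting all matches first and then runs rm once on the whole batch, thanks to the trailing +:

```shell
# Build a tree with two directories named "cache" at different depths
mkdir -p project/cache project/src/cache
touch project/cache/a.tmp project/src/cache/b.tmp

# Find every directory named "cache" below the current directory
# and delete it, including its contents
find . -type d -name "cache" -exec rm -rf {} +

# The rest of the tree is untouched
[ -d project/src ] && echo "project/src still exists"
```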

 

Removing empty directories using find command

To purge all empty directories using the find command, simply execute:

$ sudo find . -type d -empty -delete
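A quick demonstration with a throwaway tree (the names are invented): only the directories that hold nothing are purged.

```shell
# Two empty directories and one that contains a file
mkdir -p tree/empty1 tree/empty2 tree/full
touch tree/full/keep.txt

# Delete only the empty directories
find tree -type d -empty -delete

ls tree   # only "full" remains
```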

 

Conclusion

We have covered the avenues you can use to delete directories in Linux: both empty and non-empty ones. We hope that, going forward, you will not get stuck trying to delete directories from your system.

If you want to delete a single file, you can use the unlink command instead of rm. The syntax of the unlink command is similar to that of rm:

$ unlink filename

The unlink command shares another similarity with the rm command: once either command removes a file, the file becomes irrecoverable.
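A quick illustration with a throwaway file (the filename is made up):

```shell
echo "temporary notes" > notes.txt

# unlink takes exactly one file name and accepts no options
unlink notes.txt
[ ! -f notes.txt ] && echo "notes.txt deleted"
```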

Learning to delete files from a Linux machine is a part of the process of becoming a Linux power user.

Now that you’ve learned how to do it, you are one step closer to mastering Linux. You can check out our other posts for more Linux guides. 

How to install Kali Linux on your PC

Based on Debian, Kali Linux is a free and opensource operating system popularly used for penetration testing, ethical hacking, and digital forensics. It is maintained by Offensive Security and comes with an array of powerful forensic and exploitation tools that enable cybersecurity experts to carry out penetration tests with utmost efficiency.

The latest release of Kali Linux is Kali 2020.2, which was released in May 2020. It ships with new features such as:

  • A revamped login screen, new desktop background with polished desktop icons.
  • New dark and white themes.
  • PowerShell integration.
  • New software packages such as Python 3.8, Nextnet, Joplin, and Spiderfoot.
  • New logos for every software package.

In this guide, we will walk you through a detailed procedure of installing Kali Linux and leveraging its vast array of penetration tools.

Step 1: Download Kali Linux

To get started, head over to the Kali Linux download page and download the latest Kali ISO image which, at the time of penning this guide, is Kali Linux 2020.2. The image comes in both 32-bit and 64-bit versions, so choose according to your system.

On the command line, you can download the image using the wget command as shown:

$ wget https://cdimage.kali.org/kali-2020.2/kali-linux-2020.2-installer-netinst-i386.iso

If you are on a good and stable internet connection, the download should take an hour or thereabouts. In my case, this took 45 minutes with a bandwidth of 8 Mbps.
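Before writing the image to a USB drive, it is wise to verify its integrity. Kali publishes SHA256 checksums on its download page; the mechanics of the check can be demonstrated with a stand-in file (in practice you would run sha256sum -c against the official checksum file and the real ISO):

```shell
# Stand-in for the downloaded ISO (for illustration only)
echo "pretend this is the ISO" > sample.iso

# Record its checksum the way a SHA256SUMS file does
sha256sum sample.iso > SHA256SUMS.demo

# Verification succeeds while the file is intact
sha256sum -c SHA256SUMS.demo
```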

Step 2: Create a bootable USB drive

Once you have downloaded the image, use the Rufus utility to create a bootable USB drive that you will use to install Kali Linux. Upon completion, plug in the bootable USB drive and reboot your PC.

As you do so, spare a moment to tweak your BIOS settings (by pressing F10, F9, or the ESC key, depending on the model of your PC) and ensure that your USB installation medium has the highest boot priority. Save the changes and proceed to boot your system.

Step 3: Install Kali Linux

Upon booting, you will be presented with the Kali Linux installer menu listing the installation options as shown. The first option allows you to install Kali Linux using a graphical interface. As this is the easier option compared to the text-based mode, select it and simply hit the ENTER key on your keyboard.

Kali Linux Installer menu

In the next step, select your preferred installation language and click on the ‘Continue‘ button.

select installation language for Kali

In the next step, select your location / country from the list provided then click ‘Continue‘. Usually, this list is influenced by the installation language selected earlier.

select location provided in Kali

Next, select your preferred keyboard layout and click on ‘Continue‘.

Select keyboard layout in Kali Linux

Next, provide your preferred hostname. The hostname is the name that will be used to identify your Kali Linux system on a network; it is the equivalent of the NetBIOS name in Windows. So, type in your preferred hostname and click ‘Continue‘ to proceed to the next step.

set up hostname for Kali Linux

Next, provide a domain name. This is not mandatory, and you can leave it blank, especially if you are not in a domain environment.

configure domain name for Kali Linux

Provide a full name for the user you are about to add to Kali. This name will be displayed upon logging in. Next, click ‘Continue‘.

specify a full name for the new user in Kali Linux

In the next step, provide the username for the user’s account and click ‘Continue‘. This is the name that you will provide when logging in.

Specify username for the user account

Then, provide the user’s password. This is the password that will be used to log in to the system once the installation is complete.

Specify the password for the new user created in Kali

Next, select your desired timezone and click ‘Continue‘.

Configure the timezone

This takes you to the partitioning step as shown. The first option, ‘Guided - use entire disk‘, allows the installer to automatically partition the hard disk without your intervention. Use this option if you are relatively new to Linux and not familiar with manual partitioning.

If you are comfortable with manually partitioning your hard drive, select the ‘Manual‘ option. But for now, we will go with the ‘Guided’ option.

guided-partitioning-kali-linux

The next step highlights your hard drive as shown. If you have only one hard drive in your PC, do nothing and simply click ‘Continue‘. However, if you have more than one hard drive on your system, select the drive on which you’d like to install Kali Linux.

select the hard drive that you want to partition

In the next section, you will be presented with three options for how you’d like the partitioning to be done. If you are a new user or you don’t mind having all files in one partition, select the first option. However, if you prefer separate partitions, select the second or third option and click ‘Continue‘. For this guide, we will go with the first option.

Next, to save the partitioning scheme, select the ‘Finish partitioning and write changes to disk‘ option and click ‘Continue‘.

Finish partitioning and write changes to disk

When prompted to write the changes to disk, select the ‘Yes‘ option. This applies the partitioning scheme to your hard drive.

Installation of the base system will begin and should take just a few minutes.

install base system Kali

If you intend to use a proxy server, specify it here; otherwise, leave the section blank.

configure http proxy

Next, select your preferred desktop environment and software packages you’d like included in your Kali Linux instance. Click ‘Continue‘ to commence the installation.

select the desktop environment and tools to install

This will take a while as the installer downloads and installs the software packages on your PC. This would also make a perfect time to grab yourself a cup of coffee and some snacks.

Installing Kali Linux

Once the installation is complete, you will get a notification informing you that the installation has been successful. Click on the ‘Continue’ button to reboot your system, and be sure to remove your bootable USB installation medium.

You will be greeted with a login screen as shown below. Simply provide the username and password and hit the ‘Log In‘ button.

The default XFCE desktop environment appears as shown.

Conclusion

You can now start exploring your instance of Kali Linux and start leveraging its rich set of reconnaissance and exploitation tools to perform penetration tests and forensics.

To update Kali Linux and ensure that your software packages are up to date, perform a system upgrade by executing the commands:

$ sudo apt update -y

$ sudo apt upgrade -y

And that’s about it for this guide and we hope that you can now install Kali Linux on your PC without a hassle.

WordPress on Ubuntu 16.04 With Caddy

caddy web server

Introduction

WordPress is a popular content management system based on PHP and MySQL, distributed under the terms of the GNU GPLv2 (or later). In most cases it is installed with Apache or NGINX as the web server, or, as we explained in a previous tutorial, it can run in an isolated environment like a Docker container.

Alongside these choices, there is a new web server which is rapidly gaining popularity: Caddy.

Caddy (or the Caddy web server) is an open source HTTP/2 web server that enables HTTPS by default, without requiring external configuration. Caddy also has strong integration with Let’s Encrypt.

This tutorial explains how to install and configure WordPress on top of a Caddy web server installed by following our guide.

Install PHP

As we said in the introduction, WordPress requires a web server, MySQL and PHP. First, install PHP and the extensions required by WordPress by executing the following command:

# apt-get install php7.0-fpm php7.0-mysql php7.0-curl php7.0-gd php7.0-mbstring php7.0-mcrypt php7.0-xml php7.0-xmlrpc

Verify that PHP was correctly installed by checking its version:

$ php -v

Install and Configure MariaDB

MariaDB is also available in the repository, so just use apt:

# apt-get install mariadb-client mariadb-server

MariaDB is a MySQL fork, and it keeps the MySQL name for its systemd service:

# systemctl start mysql

Set the MariaDB root password to secure your database:

# mysql_secure_installation

You will be asked for the following configuration parameters:

Enter current password for root (enter for none): PRESS ENTER

Set root password? [Y/n] Y
ENTER YOUR PASSWORD

Remove anonymous users? [Y/n] Y
Disallow root login remotely? [Y/n] Y
Remove test database and access to it? [Y/n] Y
Reload privilege tables now? [Y/n] Y

Once that step is complete you can access the MariaDB database with your password:

$ mysql -u root -p

Create New Database and User

Start the MariaDB shell:

$ mysql -u root -p

Use the MariaDB prompt to create a new database for WordPress. In this tutorial, we use wordpressdb as the database name, and wordpressusr as the username for the WP installation. So our code looks like this:

mysql> CREATE DATABASE wordpressdb DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
mysql> CREATE USER wordpressusr@localhost IDENTIFIED BY 'usr_strong_password';
mysql> GRANT ALL PRIVILEGES ON wordpressdb.* TO wordpressusr@localhost IDENTIFIED BY 'usr_strong_password';

Next, you can flush privileges and exit:

mysql> FLUSH PRIVILEGES;
mysql> EXIT;

Install WordPress

Downloading and installing WordPress is quite an easy process, which requires executing just the following commands:

# cd /var/www
# wget wordpress.org/latest.zip
# unzip latest.zip

Change WordPress permissions with:

# chown -R www-data:www-data wordpress

Rename the WordPress config file and edit it:

# cd wordpress
# mv wp-config-sample.php wp-config.php
# $EDITOR wp-config.php

Here, change the database information, using the values specified during the MariaDB configuration process:

define( 'DB_NAME', 'wordpressdb' );
define( 'DB_USER', 'wordpressusr' );
define( 'DB_PASSWORD', 'usr_strong_password' );

Configure Caddy and Start WordPress Installation Wizard

This tutorial assumes you have already installed the Caddy web server. Edit its configuration file:

# $EDITOR /etc/caddy/Caddyfile

In this file, paste the following content:


example.com {
    tls admin@example.com
    root /var/www/wordpress
    gzip
    fastcgi / /run/php/php7.0-fpm.sock php
    rewrite {
        if {path} not_match ^\/wp-admin
        to {path} {path}/ /index.php?_url={uri}
    }
}

Note: admin@example.com is the email address that will be used for the Let’s Encrypt certificate request.

Restart Caddy:

# systemctl restart caddy

As a last step, go to your website with a web browser. This will start the WordPress GUI installation wizard, which will finish the installation process and give you access to the WordPress dashboard.

Conclusion

At the end of the previous steps, a new WordPress instance will be running on top of this new, tiny and powerful web server. Caddy will request certificates from Let’s Encrypt and automatically enable HTTPS connections, without any other manual configuration.

Caddy Web Server on Ubuntu 16.04

caddy web server

Introduction

Across our many tutorials we have looked at hundreds of different technologies. In almost every article, we’ve based our work on Apache or NGINX based servers.
However, there is a new web server which is gaining popularity due to its overall simplicity… welcome to the world of the Caddy web server! This web server is entirely written in Go, and was first released in 2015. Caddy configuration is based on a Caddyfile, and, as we will see in an example, these files are incredibly easy to write and manage.

What’s really got us excited, however, is the fact that it integrates Let’s Encrypt by default and without any manual configuration!

Caddy Features

  • Automatic HTTPS on by default, via Let’s Encrypt
  • HTTP/2 by default
  • Static files in the current working directory
  • All server types, directives, DNS providers, and other features are just plugins
  • Can be used like a library in other Go programs
  • Configurable to run system commands at startup and shutdown
  • Caddy is a single executable file with no dependencies at all, apart from the kernel

Impressive right? And this is not even an exhaustive list of the features available!

Now that you’re excited, let’s take a look at how to install and use the Caddy web server on an Ubuntu 16.04 server.

Install Caddy Web Server

Caddy provides an installation script which downloads and installs the Caddy binaries. As mentioned in the introduction, this web server has no dependencies.

Execute the following command:

$ curl https://getcaddy.com | bash

During the installation, the script will prompt for your password in order to gain administrative privileges.

The output will be:

 % Total % Received % Xferd Average Speed Time Time Time Current
 Dload Upload Total Spent Left Speed
100 5593 100 5593 0 0 3696 0 0:00:01 0:00:01 --:--:-- 3696
Downloading Caddy for linux/amd64...
https://caddyserver.com/download/linux/amd64?plugins=
Download verification OK
Extracting...
Putting caddy in /usr/local/bin (may require password)
[sudo] password for gmolica: 
Caddy 0.10.6
Successfully installed

Once the script has completed its work, the Caddy binaries will be installed and ready to use, and, as we can see, caddy is in the /usr/local/bin/ directory.

What’s important to note is that the installation process will not create any system-wide configuration, so this part is up to you. Luckily, the process is simple.

Configure Caddy

By default, Caddy will use the directory it is executed from as the root of the site, so, if you execute it from $HOME, it will use that as its root. This means, of course, that with Caddy it’s really easy to work on sites locally.

To execute:

$ caddy

The terminal will show the following message:

Activating privacy features... done.
http://:2015
WARNING: File descriptor limit 1024 is too low for production servers. 
At least 8192 is recommended. Fix with "ulimit -n 8192".

Right now, we can ignore the WARNING and note that Caddy runs on localhost, on port 2015.

Going with a web browser to http://your_server_IP:2015 will return a 404 Not Found error page. This is due to the fact that the directory Caddy is using as its root does not contain a web site. Before moving on, create the required directories:

Creating Required Directories

First of all, create a directory that will contain the main Caddyfile:

# mkdir /etc/caddy

Change its owner to the root user and its group to www-data:

# chown -R root:www-data /etc/caddy

Create a second directory, where Caddy will store the SSL certificates and private keys:

# mkdir /etc/ssl/caddy

Change its owner to www-data:

# chown -R www-data /etc/ssl/caddy

Change the permissions, as follows:

# chmod 0770 /etc/ssl/caddy

Next, create the directory that will contain the site, /var/www:

# mkdir /var/www

This directory will be owned by www-data:

# chown www-data:www-data /var/www

Download the Caddy Unit File

By default, Caddy will not install itself as a systemd service, but the project provides an official unit file. Download it with the following command:

# curl -s https://raw.githubusercontent.com/mholt/caddy/master/dist/init/linux-systemd/caddy.service -o /etc/systemd/system/caddy.service

Looking at this file we will note the following lines:

; Letsencrypt-issued certificates will be written to this directory.
Environment=CADDYPATH=/etc/ssl/caddy

; Always set "-root" to something safe in case it gets forgotten in the Caddyfile.
ExecStart=/usr/local/bin/caddy -log stdout -agree=true -conf=/etc/caddy/Caddyfile -root=/var/tmp

This is why we set up those directories in the previous steps.

Create an empty Caddyfile:

# touch /etc/caddy/Caddyfile

Execute the following commands to enable Caddy to run on boot:

# systemctl daemon-reload
# systemctl enable caddy

Check its status:

# systemctl status caddy
----------------------------------
● caddy.service - Caddy HTTP/2 web server
 Loaded: loaded (/etc/systemd/system/caddy.service; enabled; vendor preset: en
 Active: inactive (dead)
 Docs: https://caddyserver.com/docs

Allow HTTP and HTTPS Connections

Through ufw, allow both HTTP and HTTPS connections so that Caddy will be able to correctly serve users:

# ufw allow http
# ufw allow https

Testing Caddy

The last step is to test Caddy to make sure that everything has been set up correctly.

Edit Caddyfile

Previously, we created an empty Caddyfile. Now, it’s time to populate it. Open it with a text editor:

# $EDITOR /etc/caddy/Caddyfile

In that file, paste the content:

example.com {
 root /var/www
 gzip
 tls gmolica@example.com
}

Note: the tls line contains the email address that will be used by Caddy to obtain SSL certificates from Let’s Encrypt.

Save and close.

Start Caddy:

# systemctl start caddy

Create a Web Page

Create a web page for testing Caddy:

$ echo '<h1>Website using Caddy</h1>' | sudo tee /var/www/index.html

Use the same root that you set up in the Caddyfile.

Now, with a web browser, just go to https://example.com, and you will see our test page!

Conclusion

We have seen how to install and use Caddy. Note how easy it is to create a Caddyfile to customize the server’s behaviour! Also keep in mind that this ease of use becomes even more evident in more complex environments.

Decentralized Communication with Matrix on Ubuntu 16.04

Decentralized Communication Matrix Logo

Introduction to Decentralized Communication

Matrix is an open standard for real-time, interoperable and decentralized communication over IP, used to power VoIP/WebRTC signalling, Internet of Things communication, Instant Messaging, and every program that requires a standard HTTP API for publishing and subscribing to data whilst tracking the conversation history.

Developed as an open initiative with no company behind it, its “longer term goal is for Matrix to act as a generic HTTP messaging and data synchronization system for the whole web – allowing people, services and devices to easily communicate with each other, empowering users to own and control their data and select the services and vendors they want to use”.

Besides being a standard, Matrix provides many features:

  • Open Standard HTTP APIs for transferring JSON messages (e.g. instant messages, WebRTC signalling)
  • Client<->Server API defining how Matrix compatible clients communicate with Matrix home servers.
  • Server<->Server API defining how Matrix home servers exchange messages and synchronize history with each other.
  • Application Service API defining how to extend the functionality of Matrix with ‘integrations’ and bridge to other networks.
  • Modules specifying features that must be implemented by particular classes of clients.
  • Open source reference implementations of clients, client SDKs, home servers and application services.

We mentioned home servers: they are what store account information and communication history, sharing data with the wider Matrix ecosystem by synchronizing the communication history with other home servers.

This tutorial is about the installation of Synapse, the reference home server implementation of Matrix.

Install Matrix

Matrix provides a repository for Ubuntu, so that installations can be handled through apt.

Add Matrix Repository

First of all, add the repository key:

$ wget -qO - https://matrix.org/packages/debian/repo-key.asc | sudo apt-key add -

Add the official Matrix repository by executing:

# add-apt-repository https://matrix.org/packages/debian/

Update the apt package index:

# apt-get update

Install Matrix Synapse

Install Synapse with apt:

# apt-get install matrix-synapse

During the installation process, enter a domain name and choose whether or not to send statistics to Matrix.

Start and Enable Matrix

Start Matrix with systemctl:

# systemctl start matrix-synapse

Enable it to start at boot time:

# systemctl enable matrix-synapse

Create a New User

Creating a new user for Matrix requires a shared secret. Generate a 32-character string to use as the shared secret with:

# cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 32 | head -n 1
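As a quick sanity check, the string produced by the pipeline above can be verified to be exactly 32 alphanumeric characters. A small sketch (LC_ALL=C is added here to keep tr byte-safe under any locale):

```shell
# Verify the generated secret: 32 characters, all alphanumeric.
secret=$(LC_ALL=C tr -dc 'a-zA-Z0-9' < /dev/urandom | fold -w 32 | head -n 1)
echo "${#secret}"    # prints 32
case "$secret" in
  (*[!a-zA-Z0-9]*) echo "unexpected character" ;;
  (*)              echo "ok" ;;
esac
```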

Copy the generated string, and then open the homeserver configuration file, /etc/matrix-synapse/homeserver.yaml, with a text editor:

# $EDITOR /etc/matrix-synapse/homeserver.yaml

In this file, look for registration_shared_secret. Uncomment that line and set its value to the 32-character string generated with the previous command:


# If set, allows registration by anyone who also has the shared
# secret, even if registration is otherwise disabled.
registration_shared_secret: "urandom_generated_string"

Save and close the file.

Restart Matrix Synapse with systemctl:

# systemctl restart matrix-synapse

Now it is possible to create a new Matrix user. Use the register_new_matrix_user command as follows:

$ register_new_matrix_user -c /etc/matrix-synapse/homeserver.yaml https://localhost:8448

Configure NGINX for Matrix

Create a new virtual host file for the domain used by Matrix:

# nano /etc/nginx/sites-available/example.com

In this new file, paste the following content:


server {
    listen 80;
    listen [::]:80;

    root /var/www/html;
    index index.html index.htm;

    server_name example.com www.example.com;

    location /_matrix {
        proxy_pass http://localhost:8008;
    }

    location ~ /.well-known {
        allow all;
    }
}

The location block needs to be set up for _matrix, since this is where all Matrix clients send requests.

Enable this newly created configuration:

# ln -s /etc/nginx/sites-available/example.com /etc/nginx/sites-enabled/example.com

Test it with:

# nginx -t

The output should report that the syntax is OK.

Conclusion

Matrix is the basis for many different clients that can be used to connect to the configured homeserver and decentralize communication. This tutorial has covered the most basic steps for obtaining and running a powerful server for decentralized communication on Ubuntu 16.04.

Continuous Integration: Concourse CI on Ubuntu 16.04

Concourse Continuous Integration

Concourse Continuous Integration System

Concourse CI is a simple and scalable continuous integration system with the end goal of providing a system with as few distinct moving parts as possible. Transitioning from one CI system to another carries many risks, mainly due to the great number of settings that can accidentally change while manually clicking around in the new system’s UI.

To reduce this risk, Concourse CI uses a declarative syntax. With it, it’s possible to model any pipeline, from simple (unit, integration, deploy, ship) to complex (tests on multiple infrastructures, etc.).
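As an illustration of that declarative style, a minimal pipeline might be modeled like this (the repository URL and names are hypothetical examples, not part of this tutorial's setup):

```yaml
# One git resource and one job that runs unit tests on every commit.
resources:
- name: repo
  type: git
  source:
    uri: https://github.com/example/app.git
    branch: master

jobs:
- name: unit
  plan:
  - get: repo
    trigger: true        # run the job whenever the resource changes
  - task: run-tests
    config:
      platform: linux
      image_resource:
        type: docker-image
        source: {repository: busybox}
      inputs:
      - name: repo
      run:
        path: sh
        args: [-c, "echo running tests"]
```

The whole pipeline lives in one versioned file that can be reviewed and re-applied, which is what removes the click-around risk described above.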

In this tutorial we will see how to install Concourse CI on an Ubuntu 16.04 server, using PostgreSQL in the backend.

Getting Started – Install PostgreSQL

First of all, install PostgreSQL on the server. Concourse CI will use it to store its pipeline data. Execute the following command:

# apt-get install postgresql postgresql-contrib

Next, create a PostgreSQL user which will manage the Concourse data within the database. To do this, execute:

$ sudo -u postgres createuser concourse

By default, Concourse looks for (and attempts to connect to) a database named atc.

Create a new database:


$ sudo -u postgres createdb --owner=concourse atc

Install Concourse Continuous Integration

Download the compiled executables for Linux. In the /tmp directory, execute the following command:

# curl -LO https://github.com/concourse/concourse/releases/download/v3.3.2/concourse_linux_amd64

Next, still in /tmp, download the latest available fly command line client:


# curl -LO https://github.com/concourse/concourse/releases/download/v3.3.2/fly_linux_amd64

Move both files to /usr/local/bin, renaming them, by executing the commands:

# mv concourse* /usr/local/bin/concourse
# mv fly* /usr/local/bin/fly

Check that everything went fine by printing both versions:

$ concourse --version
$ fly --version

Both should report 3.3.2, the latest version at the time of writing.

Concourse CI Configuration

Create a configuration directory:

# mkdir /etc/concourse

Generate Encryption Keys

The components which compose Concourse CI need to communicate securely with one another, in particular the TSA and the workers. To be sure that this happens, we need to create three sets of keys:

  • Keys for the worker
  • Keys for the TSA
  • Session signing keys to sign tokens

These keys will be used automatically when each component starts, so it is important not to protect them with a passphrase. Generate the required keys by executing the following commands:

# ssh-keygen -t rsa -q -N '' -f /etc/concourse/worker_key
# ssh-keygen -t rsa -q -N '' -f /etc/concourse/tsa_key
# ssh-keygen -t rsa -q -N '' -f /etc/concourse/session_key

The TSA decides which workers are authorized to connect to the system, so the worker’s public key must be authorized. In our case, just execute the command:

# cp /etc/concourse/worker_key.pub /etc/concourse/authorized_worker_keys

Environment Configuration

The Concourse executable does not read any configuration file, but this does not mean that it cannot be configured, of course. In fact, it takes its values from environment variables passed at the start of the process.

Create a new file for the web process configuration:

# $EDITOR /etc/concourse/web_env

In that file, paste the following content:

CONCOURSE_SESSION_SIGNING_KEY=/etc/concourse/session_key
CONCOURSE_TSA_HOST_KEY=/etc/concourse/tsa_key
CONCOURSE_TSA_AUTHORIZED_KEYS=/etc/concourse/authorized_worker_keys
CONCOURSE_POSTGRES_SOCKET=/var/run/postgresql

# Match your environment
CONCOURSE_BASIC_AUTH_USERNAME=your_usr_name
CONCOURSE_BASIC_AUTH_PASSWORD=strong_pwd
CONCOURSE_EXTERNAL_URL=http://server_IP:8080

Save, close and create a new file for the worker:

# $EDITOR /etc/concourse/worker_env

There, paste the following content:

CONCOURSE_WORK_DIR=/var/lib/concourse
CONCOURSE_TSA_WORKER_PRIVATE_KEY=/etc/concourse/worker_key
CONCOURSE_TSA_PUBLIC_KEY=/etc/concourse/tsa_key.pub
CONCOURSE_TSA_HOST=127.0.0.1

Adjust the permissions of the environment files:

# chmod 600 /etc/concourse/w*_env
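These files use the plain KEY=value format that systemd's EnvironmentFile= directive reads. For a manual test run, the same file can be loaded into a shell; a throwaway demo of that technique (the real /etc/concourse files are not touched here):

```shell
# Load an EnvironmentFile-style KEY=value file into the current shell.
env_file=$(mktemp)
printf 'CONCOURSE_TSA_HOST=127.0.0.1\n' > "$env_file"
set -a               # auto-export every variable assigned from here on
. "$env_file"        # source the KEY=value lines
set +a
echo "$CONCOURSE_TSA_HOST"   # prints 127.0.0.1
rm -f "$env_file"
```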

Create a User

Create a new user to run the web process. This user should match the PostgreSQL username created earlier, so execute:

# adduser --system --group concourse

Give this user ownership of the Concourse CI configuration directory:

# chown -R concourse:concourse /etc/concourse

Create Systemd Unit Files

Create a concourse-web.service file within the /etc/systemd/system directory:

# $EDITOR /etc/systemd/system/concourse-web.service

Paste the following content:

[Unit]
Description=Concourse CI web process (ATC and TSA)
After=postgresql.service

[Service]
User=concourse
Restart=on-failure
EnvironmentFile=/etc/concourse/web_env
ExecStart=/usr/local/bin/concourse web

[Install]
WantedBy=multi-user.target

Save and close. Create a file for the worker process:

# $EDITOR /etc/systemd/system/concourse-worker.service

In this file, paste:

[Unit]
Description=Concourse CI worker process
After=concourse-web.service

[Service]
User=root
Restart=on-failure
EnvironmentFile=/etc/concourse/worker_env
ExecStart=/usr/local/bin/concourse worker

[Install]
WantedBy=multi-user.target

Configure UFW

The Concourse web process listens for connections on port 8080, so open access to that port by executing the following ufw command:

# ufw allow 8080

For the worker part, we need to allow forwarding access, so execute:

# ufw default allow routed

Start Services

At this point, start both services:

# systemctl start concourse-worker concourse-web

Enable them to start at server boot time:

# systemctl enable concourse-worker concourse-web

Conclusion

From now on, the server is ready to execute all the continuous integration features provided by Concourse CI on an Ubuntu 16.04 system. Enjoy!

Database System: PostgreSQL Replication on Ubuntu 16.04

PostgreSQL Database System Logo

PostgreSQL Database System

PostgreSQL is an advanced open source Object-Relational Database Management System (or ORDBMS). It is an extensible and highly-scalable database system, meaning that it can handle loads ranging from single machine applications to enterprise web services with many concurrent users. PostgreSQL is transactional and ACID-compliant (Atomicity, Consistency, Isolation, Durability).

It supports a large part of the SQL standard, and offers many features including:

  • Complex queries
  • Foreign keys
  • Triggers
  • Updatable views
  • Transactional integrity
  • Multiversion concurrency control

As previously said, the PostgreSQL database system can be extended by its users. There are different ways to do this, like adding new functions, operators, data types, index methods, procedural languages, etc.

It is developed by the PostgreSQL Global Development Group and released under the terms of the PostgreSQL License.

PostgreSQL provides many ways to replicate a database. In this tutorial we will configure master/slave replication, which is the process of syncing data between two databases by copying from a database on one server (the master) to one on another server (the slave).

This configuration will be done on two servers running Ubuntu 16.04.

Prerequisites

PostgreSQL 9.6 installed on both Ubuntu 16.04 servers

Configure UFW

UFW (or Uncomplicated Firewall) is a tool to manage the iptables-based firewall on Ubuntu systems. Install it (on both servers) through apt by executing:

# apt-get install -y ufw

Next, add the PostgreSQL and SSH services to the firewall. To do this, execute:

# ufw allow ssh
# ufw allow postgresql

Enable the firewall:

# ufw enable

Configure PostgreSQL Master Server

The master server will have reading and writing permissions to the database, and will be the one capable of performing data streaming to the slave server.

With a text editor, edit the PostgreSQL main configuration file, /etc/postgresql/9.6/main/postgresql.conf:

# $EDITOR /etc/postgresql/9.6/main/postgresql.conf

Uncomment the listen_addresses line and edit it, adding the master server IP address:

listen_addresses = 'master_server_IP_address'

Next, uncomment the wal_level line, changing its value:

wal_level = hot_standby

To use local syncing for the synchronization level, uncomment and edit the following line:

synchronous_commit = local

We are using two servers, so uncomment and edit the two lines as follows:

max_wal_senders = 2
wal_keep_segments = 10

Save and close the file.
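Before moving on, it is worth confirming that the settings really ended up uncommented. A grep sketch, demonstrated here against a miniature stand-in file (on the real master, point conf at /etc/postgresql/9.6/main/postgresql.conf instead):

```shell
# Count active (uncommented) replication-related settings.
conf=$(mktemp)
cat > "$conf" <<'EOF'
listen_addresses = 'master_server_IP_address'
wal_level = hot_standby
synchronous_commit = local
max_wal_senders = 2
wal_keep_segments = 10
EOF
grep -Ec '^(listen_addresses|wal_level|synchronous_commit|max_wal_senders|wal_keep_segments)' "$conf"   # prints 5
rm -f "$conf"
```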

Edit the pg_hba.conf file for the authentication configuration:

# $EDITOR /etc/postgresql/9.6/main/pg_hba.conf

Paste the following configuration:

# Localhost
host    replication     replica          127.0.0.1/32            md5
 
# PostgreSQL Master IP address
host    replication     replica          master_IP_address/32            md5
 
# PostgreSQL Slave IP address
host    replication     replica          slave_IP_address/32            md5

Save, exit and restart PostgreSQL:

# systemctl restart postgresql

Create a User for Replication

Create a new PostgreSQL user for the replication process. Log in as the postgres user and start the PostgreSQL shell:

# su - postgres
$ psql

Create a new user:

postgres=# CREATE USER replica REPLICATION LOGIN ENCRYPTED PASSWORD 'usr_strong_pwd';

Close the shell.

This concludes the master server configuration.

Configuring the Slave Server

The slave server won’t have write permissions on the database, since its only function is to accept streaming from the master; it will have only read permissions.

First, stop the PostgreSQL service:

# systemctl stop postgresql

Edit the PostgreSQL main configuration file:

# $EDITOR /etc/postgresql/9.6/main/postgresql.conf

In this file, uncomment the listen_addresses line and change its value:

listen_addresses = 'slave_IP_address'

Next, uncomment the wal_level line and change it as follows:

wal_level = hot_standby

As in the master settings, uncomment the synchronous_commit line to use local syncing:

synchronous_commit = local

Also as in the master, uncomment and edit the following two lines:

max_wal_senders = 2
wal_keep_segments = 10

Enable hot_standby for the slave server by uncommenting the following line and changing its value:

hot_standby = on

Save and exit.

Copy Data From Master to Slave

To sync from the master to the slave server, the PostgreSQL main directory on the slave must be replaced with the main directory from the master. On the slave server, log in as the postgres user:

# su - postgres

Make a backup of the current data directory:

$ cd /var/lib/postgresql/9.6/
$ mv main main_bak

Create a new main directory:

$ mkdir main/

Change permissions:

$ chmod 700 main

At this point, copy the main directory from the master to the slave server by using pg_basebackup:

# pg_basebackup -h master_IP_address -U replica -D /var/lib/postgresql/9.6/main -P --xlog

Once the transfer is complete, create a new recovery.conf file in the main directory, and paste the following content:

standby_mode = 'on'
primary_conninfo = 'host=master_IP_address port=5432 user=replica password=usr_strong_pwd'
trigger_file = '/tmp/postgresql.trigger.5432'

Save, exit and change the permissions of this file:

# chmod 600 recovery.conf
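To confirm the mode change took effect, GNU stat can print the octal permissions. A throwaway demonstration (run stat -c '%a' recovery.conf on the actual slave):

```shell
# chmod 600 leaves the owner with read/write and everyone else with nothing.
f=$(mktemp)
chmod 600 "$f"
stat -c '%a' "$f"    # prints 600
rm -f "$f"
```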

Start PostgreSQL:

# systemctl start postgresql

This concludes the slave server configuration.

Conclusion

We have seen how to configure the PostgreSQL master/slave replication, by using two servers running Ubuntu 16.04. This is just one of the many replication capabilities provided by this advanced and fully open source database system.

Control Tool: How To Install Webmin on CentOS 7

Webmin Control Tool

Introduction – A Web Based Control Tool

Webmin is a web control tool for UNIX (and other similar systems) which simplifies the management process. Normally, configuration and maintenance tasks require a lot of manual editing of text files, and the execution of command line tools, but with Webmin all these tasks can be performed through an easy to use web interface.

Some of the tasks Webmin can help with include:

  • Change system’s IP address, DNS Server settings and routing configuration
  • Share files with Windows systems through Samba
  • Install, view and remove packages in many formats, like RPM.
  • Export files and directories to other systems with the NFS protocol
  • Create and configure a virtual web server for Apache
  • Set up Disk Quotas to control how much space users can use up
  • Manage UNIX accounts
  • Manage databases, table and fields in MySQL or PostgreSQL Database Server

Webmin has a modular design, which means that its functions are contained in modules that can generally be installed or removed independently from the rest of the program. Each module is responsible for managing a service or server.

Webmin is a configuration and control tool that will read the configuration files already present on the system and update them directly.

This tutorial explains how to install Webmin on a server running Apache Web Server on top of CentOS 7.

Install Webmin

First, add the Webmin repository to the YUM repositories list. Create a new file:


# $EDITOR /etc/yum.repos.d/webmin.repo

In this file paste the following content:

[Webmin]
name=Webmin Distribution Neutral
#baseurl=http://download.webmin.com/download/yum
mirrorlist=http://download.webmin.com/download/yum/mirrorlist
enabled=1

Save and exit.

Add the Webmin author’s key:

$ wget http://www.webmin.com/jcameron-key.asc
# rpm --import jcameron-key.asc

Next, install Webmin with yum:

# yum install webmin

At the end of the installation process, you should see the following message:

Webmin install complete. You can now login to https://example.com:10000/
as root with your root password.

Secure Connections with Let’s Encrypt

By default, Webmin is configured to use HTTPS by using a self-signed certificate. At this point, we need to replace this certificate with a valid one obtained from Let’s Encrypt.

With a web browser, go to https://example.com:10000. During the first connection, the browser will signal an “Invalid SSL” error. This is due to the fact that Webmin is using a self-signed certificate that is not trusted. Allow an exception in order to continue and replace the certificate with a valid one.

Webmin will present a login screen. Here, sign in with “root” as the username and the current root password.

After logging in, the browser will display the Webmin dashboard. First of all, set a hostname by clicking on the link to the right of System hostname. A new page will show up where it’s possible to change the Hostname, entering the FQDN into the correct field. Click on Save to apply the changes.

On the left bar, click on Webmin > Webmin Configuration. Click on SSL Encryption and open the Let’s Encrypt tab. In this tab, enter all the information required to secure the connections to Webmin with a valid certificate. Then, reload the page to use the new certificate.

Conclusion

At this point Webmin is correctly installed and running. Through its interface it is now possible to manage a UNIX-like system almost entirely without needing to manually edit text-only configuration files.

Note: This tool was created for those who are familiar with Linux but are not familiar with all the system administration details. If you already have strong skills and expertise pertaining to system maintenance, you may not like this tool as it will always be slower than editing configuration files directly.

Network Analysis: How To Install Bro On Ubuntu 16.04

Bro Network Analysis Logo

Introduction: Bro Network Analysis Framework

Bro is an open source network analysis framework with a focus on network security monitoring. It is the result of 15 years of research, widely used by major universities, research labs, supercomputing centers and many open-science communities. It is developed mainly at the International Computer Science Institute, in Berkeley, and the National Center for Supercomputing Applications, in Urbana-Champaign.

Bro has various features, including the following:

  • Bro’s scripting language enables site-specific monitoring policies
  • Targeting of high-performance networks
  • Analyzers for many protocols, enabling high-level semantic analysis at the application level
  • It keeps extensive application-layer stats about the network it monitors.
  • Bro interfaces with other applications for real-time exchange of information
  • It comprehensively logs everything and provides a high-level archive of a network’s activity.

This tutorial explains how to build from source and install Bro on an Ubuntu 16.04 Server.

Prerequisites

Bro has many dependencies. Building from source requires:

  • CMake 2.8+
  • Make
  • GCC 4.8+ or Clang 3.3+
  • SWIG
  • GNU Bison
  • Flex
  • Libpcap headers
  • OpenSSL headers
  • zlib headers

Getting Started

First of all, install all the required dependencies, by executing the following command:

# apt-get install cmake make gcc g++ flex bison libpcap-dev libssl-dev python-dev swig zlib1g-dev

Install GeoIP Database for IP Geolocation

Bro depends on GeoIP for address geolocation. Install both the IPv4 and IPv6 versions:

$ wget http://geolite.maxmind.com/download/geoip/database/GeoLiteCity.dat.gz
$ wget http://geolite.maxmind.com/download/geoip/database/GeoLiteCityv6-beta/GeoLiteCityv6.dat.gz

Decompress both archives:

$ gzip -d GeoLiteCity.dat.gz
$ gzip -d GeoLiteCityv6.dat.gz

Move the decompressed files to the /usr/share/GeoIP directory:

# mv GeoLiteCity.dat /usr/share/GeoIP/GeoIPCity.dat
# mv GeoLiteCityv6.dat /usr/share/GeoIP/GeoIPCityv6.dat

Now, it’s possible to build Bro from source.

Build Bro

The latest Bro development version can be obtained through its git repository. Execute the following command:

$ git clone --recursive git://git.bro.org/bro

Go to the cloned directory and simply build Bro with the following commands:

$ cd bro
$ ./configure
$ make

The make command will require some time to build everything. The exact amount of time, of course, depends on the server’s performance.

The configure script can be executed with arguments to specify which optional dependencies to build with, in particular the --with-* options.

Install Bro

Inside the cloned bro directory, execute:

# make install

The default installation path is /usr/local/bro.

Configure Bro

Bro configuration files are located in the /usr/local/bro/etc directory. There are three files:

  • node.cfg, used to configure which node (or nodes) to monitor.
  • broctl.cfg, the BroControl configuration file.
  • networks.cfg, containing a list of networks in CIDR notation.

Configure Mail Settings

Open the broctl.cfg configuration file:

# $EDITOR /usr/local/bro/etc/broctl.cfg

Look for the Mail Options section, and edit the MailTo line as follows:

# Recipient address for emails sent out by Bro and BroControl
MailTo = admin@example.com

Save and close. There are many other options, but in most cases the defaults are good enough.

Choose Nodes To Monitor

Out of the box, Bro is configured to operate in standalone mode. In this tutorial we are doing a standalone installation, so it’s not necessary to change very much. However, look at the node.cfg configuration file:

# $EDITOR /usr/local/bro/etc/node.cfg

In the

[bro]

section, you should see something like this:

[bro]
type=standalone
host=localhost
interface=eth0

Make sure that the interface matches the public interface of the Ubuntu 16.04 server.

Save and exit.

Configure Node’s Networks

The last file to edit is 

network.cfg

. Open it with a text editor:

# $EDITOR /usr/local/bro/etc/networks.cfg

By default, you should see the following content:

# List of local networks in CIDR notation, optionally followed by a
# descriptive tag.
# For example, "10.0.0.0/8" or "fe80::/64" are valid prefixes.

10.0.0.0/8          Private IP space
172.16.0.0/12       Private IP space
192.168.0.0/16      Private IP space

Delete the three entries (which are just examples of how to use this file), and enter the public and private IP space of your server, in the format:

X.X.X.X/X        Public IP space
X.X.X.X/X        Private IP space

Save and exit.
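CIDR membership is just a bitwise comparison: an address belongs to a prefix when both agree on the masked leading bits. A small shell sketch of that check (the ip_to_int and in_cidr helpers are our own illustrations, not part of Bro):

```shell
# ip_to_int converts a dotted-quad IPv4 address to a 32-bit integer;
# in_cidr masks both addresses with the prefix length and compares them.
ip_to_int() {
  IFS=. read -r a b c d <<EOF
$1
EOF
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}
in_cidr() {  # usage: in_cidr 192.168.1.5 192.168.0.0/16
  net=${2%/*} bits=${2#*/}
  mask=$(( (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
  [ $(( $(ip_to_int "$1") & mask )) -eq $(( $(ip_to_int "$net") & mask )) ]
}
in_cidr 10.20.30.40 10.0.0.0/8 && echo inside     # prints inside
in_cidr 11.0.0.1 10.0.0.0/8 || echo outside       # prints outside
```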

Manage Bro Installation with BroControl

Managing Bro requires using BroControl, which comes in the form of an interactive shell and a command line tool. Start the shell with:

# /usr/local/bro/bin/broctl

To use it as a command line tool, just pass an argument to the previous command, for example:

# /usr/local/bro/bin/broctl status

This will check Bro’s status, by showing output like:

Name         Type       Host          Status    Pid    Started
bro          standalone localhost     running   6807   20 Jul 12:30:50

Conclusion

This concludes the Bro installation tutorial. We used the source-based installation because it is the most efficient way to obtain the latest version available; however, this network analysis framework can also be downloaded in pre-built binary format.

See you next time!

NoSQL: How To Install OrientDB on Ubuntu 16.04

OrientDB NoSQL DBMS

Introduction – NoSQL and OrientDB

When talking about databases, in general, we refer to two major families: RDBMS (Relational Database Management Systems), which use a language named Structured Query Language (SQL) as the interface for users and application programs, and non-relational database management systems, or NoSQL databases.

Between the two models there is a huge difference in the way they consider (and store) data.

Relational Database Management Systems

In the relational model (like MySQL, or its fork, MariaDB), a database is a set of tables, each containing one or more data categories organized in columns. Each row of the DB contains a unique instance of data for categories defined by columns.

Just as an example, consider a table containing customers. Each row corresponds to a customer, with columns for name, address, and every other required piece of information. Another table could contain orders, with product, customer, date and everything else. A user of this DB can obtain a view that fits their needs, for example a report about customers that bought products in a specific price range.

NoSQL Database Management Systems

In the NoSQL (or Not only SQL) database management systems, databases are designed implementing different “formats” for data, like a document, key-value, graph and others. The database systems realized with this paradigm are built especially for large-scale database clusters, and huge web applications. Today, NoSQL databases are used by major companies like Google and Amazon.

Document databases

Document databases store data in document format. This kind of DB is usually used with JavaScript and JSON; however, XML and other formats are also accepted. An example is MongoDB.

Key-value databases

This is a simple model pairing a unique key with a value. These systems are performant and highly scalable, and are often used for caching. Examples include BerkeleyDB and MemcacheDB.
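A toy flat-file illustration of the model (a teaching stand-in, not how BerkeleyDB or MemcacheDB actually store data): each unique key maps to one value, and a later write for the same key supersedes the earlier one.

```shell
# put appends a key<TAB>value record; get returns the latest value for a key.
db=$(mktemp)
put() { printf '%s\t%s\n' "$1" "$2" >> "$db"; }
get() { awk -F'\t' -v k="$1" '$1 == k { v = $2 } END { print v }' "$db"; }
put user:1 alice
put user:2 bob
put user:1 carol     # a newer write for user:1 wins
get user:1           # prints carol
get user:2           # prints bob
rm -f "$db"
```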

Graph databases

As the name suggests, these databases store data using graph models, meaning that data is organized as nodes and the interconnections between them. This is a flexible model which can evolve over time and use. These systems are applied where there is a need to map relationships.
Examples include IBM Graph, Neo4j and OrientDB.

OrientDB

OrientDB, as stated by the company behind it, is a multi-model NoSQL Database Management System that “combines the power of graphs with documents, key/value, reactive, object-oriented and geospatial models into one scalable, high-performance operational database”.

OrientDB also has support for SQL, with extensions to manipulate trees and graphs.

Goals

This tutorial explains how to install and configure OrientDB Community on a server running Ubuntu 16.04.

Download OrientDB

On an up-to-date server, download the latest version of OrientDB by executing the following command:

$ wget -O orientdb-community-2.2.22.tar.gz "http://orientdb.com/download.php?file=orientdb-community-2.2.22.tar.gz&os=linux"

This is a tarball containing pre-compiled binaries, so extract the archive with tar:

$ tar -zxf orientdb-community-2.2.22.tar.gz

Move the extracted directory into /opt:

# mv orientdb-community-2.2.22 /opt/orientdb

Start OrientDB Server

Starting the OrientDB server requires the execution of the shell script contained in orientdb/bin/:

# /opt/orientdb/bin/server.sh

During the first start, this script will display some information and ask for an OrientDB root password:

+---------------------------------------------------------------+
|               WARNING: FIRST RUN CONFIGURATION                |
+---------------------------------------------------------------+
| This is the first time the server is running. Please type a   |
| password of your choice for the 'root' user or leave it blank |
| to auto-generate it.                                          |
|                                                               |
| To avoid this message set the environment variable or JVM     |
| setting ORIENTDB_ROOT_PASSWORD to the root password to use.   |
+---------------------------------------------------------------+

Root password [BLANK=auto generate it]: ********
Please confirm the root password: ********

After that, the OrientDB server will start:

INFO OrientDB Server is active v2.2.22 (build fb2b7d321ea8a5a5b18a82237049804aace9e3de). [OServer]

From now on, we will need a second terminal to interact with the OrientDB server.

Stop OrientDB by hitting Ctrl+C.

Configure a Daemon

At this point, OrientDB is just a bunch of shell scripts. With a text editor, open /opt/orientdb/bin/orientdb.sh:

# $EDITOR /opt/orientdb/bin/orientdb.sh

In the first lines, we will see:

#!/bin/sh
# OrientDB service script
#
# Copyright (c) OrientDB LTD (http://orientdb.com/)

# chkconfig: 2345 20 80
# description: OrientDb init script
# processname: orientdb.sh

# You have to SET the OrientDB installation directory here
ORIENTDB_DIR="YOUR_ORIENTDB_INSTALLATION_PATH"
ORIENTDB_USER="USER_YOU_WANT_ORIENTDB_RUN_WITH"

Configure ORIENTDB_DIR and ORIENTDB_USER.

Create a user, for example orientdb, by executing the following command:

# useradd -r orientdb -s /usr/sbin/nologin

orientdb is the user we enter in the ORIENTDB_USER line.

Change the ownership of /opt/orientdb:

# chown -R orientdb:orientdb /opt/orientdb

Change the server configuration file’s permissions:

# chmod 640 /opt/orientdb/config/orientdb-server-config.xml

Install the Systemd Service

The OrientDB tarball contains a service file, /opt/orientdb/bin/orientdb.service. Copy it to the /etc/systemd/system directory:

# cp /opt/orientdb/bin/orientdb.service /etc/systemd/system

Edit the OrientDB service file:

# $EDITOR /etc/systemd/system/orientdb.service

There, the [Service] block should look like this:

[Service]
User=ORIENTDB_USER
Group=ORIENTDB_GROUP
ExecStart=$ORIENTDB_HOME/bin/server.sh

Edit it as follows:

[Service]
User=orientdb
Group=orientdb
ExecStart=/opt/orientdb/bin/server.sh

Save and exit.

Reload the systemd daemon:

# systemctl daemon-reload

Start OrientDB and enable it to start at boot time:

# systemctl start orientdb
# systemctl enable orientdb

Check OrientDB status:

# systemctl status orientdb

The command should output:

● orientdb.service - OrientDB Server
 Loaded: loaded (/etc/systemd/system/orientdb.service; disabled; vendor preset: enabled)
 Active: active (running) ...

And that’s all! OrientDB Community is installed and correctly running.

Conclusion

In this tutorial we have seen a brief comparison between RDBMS and NoSQL DBMS. We have also installed and completed a basic configuration of OrientDB Community server-side.

This is the first step for deploying a full OrientDB infrastructure, ready for managing large-scale systems data.

eCommerce: Open eShop on Ubuntu 16.04

Open eShop eCommerce Platform

Introduction – Open eShop eCommerce Platform

Open eShop is open source software for eCommerce platforms. It was developed to sell digital goods without commissions.

Its features include:

  • Many payment system options, like Paypal, cards (using Paymill, Authorize, or Stripe) and Bitpay
  • Integrated Customer Support System, which fully supports clients through an easy interface, with notifications about new tickets
  • Discount coupons limited by product, time or availability
  • Fully mobile compatible and responsive
  • Optimized for search engines
  • Integrated Blog, FAQ and Forum infrastructures
  • Detailed tracking about store’s performance
  • License generation for selling digital goods

There are three options for getting Open eShop: Lite, Hosting and Pro. The first one is free of cost.

In this tutorial we will install Open eShop Lite on an Ubuntu 16.04 server.

Prerequisites

A LAMP stack (Apache, MySQL and PHP) installed on the Ubuntu 16.04 server.

Getting Started – Create Database

Start the MySQL shell:

$ mysql -u root -p

Create a new user named openeshop, and a new database named openeshop_db:

mysql> CREATE DATABASE openeshop_db;
Query OK, 1 row affected (0.00 sec)

mysql> CREATE USER 'openeshop'@'localhost' IDENTIFIED BY 'usr_strong_pwd';
Query OK, 0 rows affected (0.00 sec)

mysql> GRANT ALL PRIVILEGES ON openeshop_db.* TO 'openeshop'@'localhost';
Query OK, 0 rows affected (0.00 sec)

mysql> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)

mysql> EXIT;
Bye

Install Open eShop Lite

Download Open eShop Lite in the web root directory, which is /var/www/html. First, create a new directory with the following command:

# mkdir /var/www/html/openeshop

Move into this newly created directory:

# cd /var/www/html/openeshop

Download the Open eShop installation script by executing the following wget command:

# wget https://raw.githubusercontent.com/open-classifieds/open-eshop/master/install-eshop.php

Change the owner of the downloaded file, install-eshop.php, by executing the following command:

# chown -R www-data:www-data install-eshop.php

Create an Apache Virtual Host File for Open eShop

The next step is to create a new virtual host file for Open eShop. We will name it openeshop.conf. Execute the command:

# $EDITOR /etc/apache2/sites-available/openeshop.conf

Paste in that file the following content:

<VirtualHost *:80>
 ServerAdmin admin@example.com
 DocumentRoot /var/www/html/openeshop
 DirectoryIndex install-eshop.php
 ServerName example.com
 ServerAlias www.example.com
<Directory /var/www/html/openeshop/>
 Options Indexes FollowSymLinks MultiViews
 AllowOverride All
 Require all granted
</Directory>
 ErrorLog /var/log/apache2/example.com-error_log
 CustomLog /var/log/apache2/example.com-access_log common
</VirtualHost>

Save, exit and enable the new Virtual Host file:

# a2ensite openeshop

It should display the following text:

Enabling site openeshop.
To activate the new configuration, you need to run:
 service apache2 reload

However, we will use systemctl to activate the new configuration by restarting the Apache service. Execute:

# systemctl restart apache2

Check the Apache status with:

# systemctl status apache2
apache2.service - LSB: Apache2 web server
 Loaded: loaded (/etc/init.d/apache2; bad; vendor preset: enabled)
 Drop-In: /lib/systemd/system/apache2.service.d
         └─apache2-systemd.conf
 Active: active (running)

So, the web server is correctly running.

Finish Open eShop Installation

At this point, with a web browser, go to the URL configured in the Virtual Host file (example.com in this tutorial; of course, change it to your own domain).

The install-eshop.php installer should display the following message:

OE Installation requirement: Before you proceed with your OE installation: Keep in mind OE uses the short tag "short cut" syntax.

 Thus the short_open_tag directive must be enabled in your php.ini.

Easy Solution:
1. Open php.ini file and look for line short_open_tag = Off
2. Replace it with short_open_tag = On
3. Restart then your PHP server
4. Refresh this page to resume your OE installation
5. Enjoy OE ;)

On the server, open the php.ini file:

# $EDITOR /etc/php/7.0/apache2/php.ini

Change the line short_open_tag = Off to short_open_tag = On (it should be around line 202). Save, exit and restart Apache:

# systemctl restart apache2
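The same php.ini edit can be done non-interactively. Here is a minimal sketch using sed, demonstrated on a sample file (on the server the target would be /etc/php/7.0/apache2/php.ini):

```shell
# Create a sample file standing in for php.ini, then flip the directive.
printf 'short_open_tag = Off\n' > /tmp/sample-php.ini
sed -i 's/^short_open_tag = Off/short_open_tag = On/' /tmp/sample-php.ini
grep '^short_open_tag' /tmp/sample-php.ini   # prints: short_open_tag = On
```

After editing the real file, restarting Apache makes the change effective.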

Reloading the page in the web browser should start the last step of the installation process, which depends on your ecommerce for digital goods.

Conclusion

In this tutorial we have seen how to easily install and configure an e-commerce platform for selling digital goods, like ebooks, music, etc. Depending on your business size, the Lite version of Open eShop may not be enough. The project also offers a Pro version, with more services, but, of course, that’s not free of cost.

Monitoring Server: Install Zabbix on an Ubuntu 16.04 Server

Zabbix logo

Monitoring Server – What is Zabbix

Zabbix is an enterprise-class open source distributed monitoring server solution. The software monitors different parameters of a network and the integrity of a server, and also allows the configuration of email based alerts for any event. Zabbix offers reporting and data visualization features based on the data stored in a database (MySQL, for example). Every metric collected by the software is accessible through a web-based interface.

Zabbix is released under the terms of the GNU General Public License version 2 (GPLv2), totally free of cost.

In this tutorial we will install Zabbix on an Ubuntu 16.04 server running MySQL, Apache and PHP.

Install the Zabbix Server

First, we’ll need to install a few PHP modules required by Zabbix:

# apt-get install php7.0-bcmath php7.0-xml php7.0-mbstring

The Zabbix package available in the Ubuntu repositories is outdated. Use the official Zabbix repository to install the latest stable version.

Install the repository package by executing the following commands:

$ wget http://repo.zabbix.com/zabbix/3.2/ubuntu/pool/main/z/zabbix-release/zabbix-release_3.2-1+xenial_all.deb
# dpkg -i zabbix-release_3.2-1+xenial_all.deb

Then update the apt package sources:

# apt-get update

Now it’s possible to install Zabbix Server with MySQL support and the PHP front-end. Execute the command:

# apt-get install zabbix-server-mysql zabbix-frontend-php

Install the Zabbix agent:

# apt-get install zabbix-agent

Zabbix is now installed. The next step is to configure a database for storing its data.

Configure MySQL for Zabbix

We need to create a new MySQL database, in which Zabbix will store the collected data.

Start the MySQL shell:

$ mysql -uroot -p

Next:

mysql> CREATE DATABASE zabbix CHARACTER SET utf8 COLLATE utf8_bin;
Query OK, 1 row affected (0.00 sec)

mysql> GRANT ALL PRIVILEGES ON zabbix.* TO zabbix@localhost IDENTIFIED BY 'usr_strong_pwd';
Query OK, 0 rows affected, 1 warning (0.00 sec)

mysql> EXIT;
Bye

Next, import the initial schema and data.

# zcat /usr/share/doc/zabbix-server-mysql/create.sql.gz | mysql -uzabbix -p zabbix

Enter the password for the zabbix user created in the MySQL shell.

Next, we need to edit the Zabbix Server configuration file, /etc/zabbix/zabbix_server.conf:

# $EDITOR /etc/zabbix/zabbix_server.conf

Search for the DBPassword section of the file:


### Option: DBPassword                          
#       Database password. Ignored for SQLite.  
#       Comment this line if no password is used.
#                                                
# Mandatory: no                                  
# Default:                                      
# DBPassword=

Uncomment the DBPassword= line and set it to the password created in MySQL:

DBPassword=usr_strong_pwd
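For reference, the same uncomment-and-set edit can be scripted with sed; this is a sketch run against a sample file, not the real /etc/zabbix/zabbix_server.conf:

```shell
# Sample line as it appears commented out in zabbix_server.conf.
printf '# DBPassword=\n' > /tmp/sample-zabbix.conf
# Uncomment it and append the MySQL password chosen earlier.
sed -i 's|^# DBPassword=|DBPassword=usr_strong_pwd|' /tmp/sample-zabbix.conf
cat /tmp/sample-zabbix.conf   # prints: DBPassword=usr_strong_pwd
```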

Next, look for the DBHost= line and uncomment it.

Save and exit.

Configure PHP

We need to configure PHP for working with Zabbix. During the installation process, the installer created a configuration file in /etc/zabbix, named apache.conf. Open this file:

# $EDITOR /etc/zabbix/apache.conf

Here it’s only necessary to uncomment the date.timezone setting and set the correct timezone:


<IfModule mod_php7.c>
    php_value max_execution_time 300
    php_value memory_limit 128M
    php_value post_max_size 16M
    php_value upload_max_filesize 2M
    php_value max_input_time 300
    php_value always_populate_raw_post_data -1
    php_value date.timezone Europe/Rome
</IfModule>

Save and exit.

At this point, restart Apache and start the Zabbix Server service, enabling it for starting at boot time:

# systemctl restart apache2
# systemctl start zabbix-server
# systemctl enable zabbix-server

Check the Zabbix status with systemctl:

# systemctl status zabbix-server

This command should output:

● zabbix-server.service - Zabbix Server
 Loaded: loaded (/lib/systemd/system/zabbix-server.service; enabled; vendor pr
 Active: active (running) ...

At this point, the server-side part of Zabbix has been correctly installed and configured.

Configure the Zabbix Web Front-End

As mentioned in the introduction, Zabbix has a web-based front-end which we’ll use for visualizing collected data. However, this interface has to be configured.

With a web browser, go to the URL http://localhost/zabbix.

The front-end setup wizard will start. Click on Next step.

Be sure that all the values are Ok, and then click on Next step again.

Insert the MySQL zabbix user password, and then click on Next step.

Insert the Zabbix server details and click on Next step: the installer will show a page with all the configuration parameters. Check again to ensure that everything is correct, then click Next step to proceed to the final screen.

Click Finish to complete the front-end installation. The default user name is Admin, with zabbix as the password.

Getting Started with the Zabbix Server

After logging in with the above mentioned credentials, we will see the Zabbix dashboard.

Go to Administration -> Users for an overview of the enabled accounts. Create a new account by clicking on Create user, then click on Add in the Groups section and select at least one group.

Save the new user credentials, and the user will appear in the Administration -> Users panel. Note that in Zabbix, access rights to hosts are assigned to user groups, not to individual users.

Conclusion

This concludes the tutorial for the Zabbix Server installation. Now, the monitoring infrastructure is ready to do its job and collect data about servers that need to be added in the Zabbix configuration.

Encryption: How To Secure an NGINX web server on Ubuntu 16.04

Let's Encrypt Encryption CA

What is Let’s Encrypt

Let’s Encrypt is a free certificate authority run by the Internet Security Research Group (ISRG). It provides an easy and automated way to obtain free SSL/TLS certificates – a required step for enabling encryption and HTTPS traffic on web servers. Most of the steps in obtaining and installing a certificate can be automated by using a tool called Certbot.

In particular, Certbot can be used whenever you have shell access to the server: in other words, when it’s possible to connect to the server through SSH.

In this tutorial we will see how to use certbot to obtain a free SSL certificate and use it with NGINX on an Ubuntu 16.04 server.

Install Certbot

The first step is to install certbot, the software client which will automate almost everything in the process. The Certbot developers maintain their own Ubuntu software repository, which contains packages newer than those present in the default Ubuntu repositories.

Add the Certbot repository:

# add-apt-repository ppa:certbot/certbot

Next, update the APT sources list:

# apt-get update

At this point, it is possible to install certbot with the following apt command:

# apt-get install certbot

Certbot is now installed and ready to use.

Obtain a Certificate

There are various Certbot plugins for obtaining SSL certificates. These plugins help in obtaining a certificate, while its installation and web server configuration are both left to the admin.

We will use a plugin called Webroot to obtain an SSL certificate.

This plugin is recommended in cases where you can modify the content being served: there is no need to stop the web server during the certificate issuance process.

Configure NGINX

Webroot works by creating a temporary file for each domain in a directory called .well-known, placed inside the web root directory. In our case, the web root directory is /var/www/html. Ensure that the directory is accessible to Let’s Encrypt for validation. To do so, edit the NGINX configuration: with a text editor, open the /etc/nginx/sites-available/default file:

# $EDITOR /etc/nginx/sites-available/default

In this file, in the server block, place the following content:

 location ~ /.well-known {
    allow all;
 }

Save, exit and check the NGINX configuration:

# nginx -t

Without errors, it should display:

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

Restart NGINX:

# systemctl restart nginx
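To picture what the Webroot plugin does during validation, here is a small simulation: it drops a token file under .well-known and reads it back, much as Let’s Encrypt fetches its challenge file over HTTP. Paths and token are illustrative; certbot creates and removes the real files itself.

```shell
# Simulate the challenge file the Webroot plugin would create.
mkdir -p /tmp/webroot/.well-known/acme-challenge
echo 'sample-token' > /tmp/webroot/.well-known/acme-challenge/test-file
# The CA would fetch this over HTTP; here we just read it back.
cat /tmp/webroot/.well-known/acme-challenge/test-file   # prints: sample-token
```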

Obtain Certificate with Certbot

The next step is to obtain a new certificate using Certbot with the Webroot plugin. In this tutorial, we will secure (as example) the domain www.example.com. It is required to specify every domain that should be secured by the certificate. Execute the following command:


# certbot certonly --webroot --webroot-path=/var/www/html -d www.example.com

During the process, Certbot will ask for a valid email address for notification purposes. It will also ask to share it with the EFF, but this is not required. After agreeing to the Terms of Service, it will obtain a new certificate.

At the end, the /etc/letsencrypt/archive directory will contain the following files:

  • chain.pem: Let’s Encrypt chain certificate.
  • cert.pem: domain certificate.
  • fullchain.pem: combination of cert.pem and chain.pem.
  • privkey.pem: certificate’s private key.

Certbot will also create symbolic links to the most recent certificate files in /etc/letsencrypt/live/domain_name/. This is the path we will use in the server configuration.

Configure SSL/TLS on NGINX

The next step is server configuration. Create a new snippet in the /etc/nginx/snippets/ directory. A snippet is a part of a configuration file that can be included in virtual host configuration files. So, create a new file:

# $EDITOR /etc/nginx/snippets/secure-example.conf

The content of this file will be the directives specifying the locations of the certificate and key. Paste the following content:

ssl_certificate /etc/letsencrypt/live/domain_name/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/domain_name/privkey.pem;

In our case, domain_name would be www.example.com.

Edit NGINX Configuration

Edit the default Virtual Host file:

# $EDITOR /etc/nginx/sites-available/default

As follows:

server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name www.example.com;
    return 301 https://$server_name$request_uri;
}

server {
    # SSL configuration
    listen 443 ssl default_server;
    listen [::]:443 ssl default_server;
    server_name www.example.com;
    include snippets/secure-example.conf;
    #
    # Note: You should disable gzip for SSL traffic.
    # See: https://bugs.debian.org/773332
...

This will enable encryption on NGINX.

Save, exit and check the NGINX configuration file:

# nginx -t

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

Restart NGINX:

# systemctl restart nginx
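Let’s Encrypt certificates expire after 90 days. Certbot’s renew subcommand only renews certificates close to expiry, so it is safe to schedule it; a hypothetical cron entry (the schedule, file location and reload hook are illustrative) could be placed in /etc/cron.d/certbot:

```
0 */12 * * * root certbot renew --quiet --post-hook "systemctl reload nginx"
```

This attempts renewal twice a day and reloads NGINX only after a successful renewal.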

Conclusion

Following all the steps above, at this point we have a secured NGINX-based web server, with encryption granted by Certbot and Let’s Encrypt. This is just a basic configuration, of course, and it’s possible to use many NGINX configuration parameters for personalizing everything, but that depends on specific web server requirements.

Container: Docker Compose on Ubuntu 16.04

docker compose logo

What is Docker Compose

Docker Compose is a tool for running multi-container Docker applications. To configure an application’s services with Compose we use a configuration file, and then, executing a single command, it is possible to create and start all the services specified in the configuration.

Docker Compose is useful for many different projects like:

  • Development: with the Compose command line tools we create (and interact with) an isolated environment which will host the application being developed.
    By using the Compose file, developers document and configure all of the application’s service dependencies.
  • Automated testing: this use case requires an environment for running tests in. Compose provides a convenient way to manage isolated testing environments for a test suite. The full environment is defined in the Compose file.

Docker Compose was built on the source code of Fig, a community project that is no longer maintained.

In this tutorial we will see how to install Docker Compose on an Ubuntu 16.04 machine.

Install Docker

We need Docker in order to install Docker Compose. First, add the public key for the official Docker repository:

$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

Next, add the Docker repository to the apt sources list:

$ sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

Update the package database and install Docker with apt:

$ sudo apt-get update
$ sudo apt install docker-ce

At the end of the installation process, the Docker daemon should be started and enabled to load at boot time. We can check its status with the following command:

$ sudo systemctl status docker
---------------------------------

● docker.service - Docker Application Container Engine
 Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
 Active: active (running) 

Install Docker Compose

At this point it is possible to install Docker Compose. Download the current release by executing the following command:


# curl -L https://github.com/docker/compose/releases/download/1.14.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose

Make the downloaded binary executable:

# chmod +x /usr/local/bin/docker-compose

Check the Docker Compose version:

$ docker-compose -v

The output should be something like this:

docker-compose version 1.14.0, build c7bdf9e

Testing Docker Compose

The Docker Hub includes a Hello World image for demonstration purposes, illustrating the configuration required to run a container with Docker Compose.

Create a new directory and move into it:

$ mkdir hello-world
$ cd hello-world

Create a new YAML file:

$ $EDITOR docker-compose.yml

In this file paste the following content:

unixmen-compose-test:
 image: hello-world

Note: the first line is used as part of the container name.

Save and exit.
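The file above uses the legacy version 1 Compose format. As a side note, with Compose 1.6 or later the same service can be written in the version 2 format (a sketch, keeping the service name used above):

```
version: '2'
services:
  unixmen-compose-test:
    image: hello-world
```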

Run the container

Next, execute the following command in the

hello-world

directory:

$ sudo docker-compose up

If everything is correct, this should be the output shown by Compose:

Pulling unixmen-compose-test (hello-world:latest)...
latest: Pulling from library/hello-world
b04784fba78d: Pull complete
Digest: sha256:f3b3b28a45160805bb16542c9531888519430e9e6d6ffc09d72261b0d26ff74f
Status: Downloaded newer image for hello-world:latest
Creating helloworld_unixmen-compose-test_1 ... 
Creating helloworld_unixmen-compose-test_1 ... done
Attaching to helloworld_unixmen-compose-test_1
unixmen-compose-test_1 | 
unixmen-compose-test_1 | Hello from Docker!
unixmen-compose-test_1 | This message shows that your installation appears to be working correctly.
unixmen-compose-test_1 | 
unixmen-compose-test_1 | To generate this message, Docker took the following steps:
unixmen-compose-test_1 | 1. The Docker client contacted the Docker daemon.
unixmen-compose-test_1 | 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
unixmen-compose-test_1 | 3. The Docker daemon created a new container from that image which runs the
unixmen-compose-test_1 | executable that produces the output you are currently reading.
unixmen-compose-test_1 | 4. The Docker daemon streamed that output to the Docker client, which sent it
unixmen-compose-test_1 | to your terminal.
unixmen-compose-test_1 | 
unixmen-compose-test_1 | To try something more ambitious, you can run an Ubuntu container with:
unixmen-compose-test_1 | $ docker run -it ubuntu bash
unixmen-compose-test_1 | 
unixmen-compose-test_1 | Share images, automate workflows, and more with a free Docker ID:
unixmen-compose-test_1 | https://cloud.docker.com/
unixmen-compose-test_1 | 
unixmen-compose-test_1 | For more examples and ideas, visit:
unixmen-compose-test_1 | https://docs.docker.com/engine/userguide/
unixmen-compose-test_1 | 
helloworld_unixmen-compose-test_1 exited with code 0

Docker containers only run as long as the command is active, so the container will stop when the test finishes running.

Conclusion

This concludes the tutorial about the installation of Docker Compose on an Ubuntu 16.04 machine. We have also seen how to create a simple project through the Compose file in YAML format.

Tutorial: How To Install Debian 9 ‘Stretch’

Debian 9 'Stretch' Tutorial

Introduction

After 26 months of development, the Debian project released its new stable version, Debian 9, code name ‘Stretch’. Thanks to the work of the Debian Security team and the Debian Long Term Support team, this new version will be supported for 5 years.

Debian 9 is dedicated to the project’s founder, Ian Murdock, who passed away on 28 December 2015.

What’s new in Debian 9

In Debian 9, Firefox and Thunderbird replace their debranded versions, Iceweasel and Icedove, which were present in the archive for more than 10 years.

Thanks to the Reproducible Builds project, almost every source package included in Debian 9 will build bit-for-bit identical binary packages; this protects users from malicious attempts to tamper with compilers and build networks.

Goals

In this tutorial we will see how to complete a minimal installation of Debian 9 for server purposes, using a netinstall image.

Debian installation

After booting from the media containing Debian 9, choose the installation method.

In this guide we will use the classic textual “Install”.

Language selection

Select the language to be used during the installation process.

Next, select your location.

Select locales

In some cases, there is no predefined locale for the chosen combination of country and language, so you’ll be required to select one.

Next, you can choose the keyboard layout.

After that, the system will load additional components. This is a very fast process.

Configuring Hostname

Insert the hostname to be used, as in the following screenshot:

Configuring Domain Name

After the hostname, let Debian know what the domain name is. In the case of a home network, a domain is not required, so we’ll leave it blank.

New Root Password

Enter a new password for the root account. Use one secure enough, because, as every GNU/Linux user knows, this is the most important password in the system.

Create a new user

Create a new, non-root user.

Verify the password, and wait.

Partitions Creation

At this point, the next step is disk preparation: we need to define the partition scheme. Debian automates almost everything. Here we choose the “guided” option using LVM, which has some advantages, such as making it easier to add disks or resize partitions later.

Next, select the disk that will contain Debian. In our case it’s just a virtual machine.

Next, decide how to partition the selected disk.

The installer will ask if we are satisfied with the chosen partitioning scheme. This is the last chance to make changes before writing the changes to the disk.

Base system installation

At this point the installation of the base system will start.

Configure the package manager

The next step is to configure the apt repositories. First, choose the mirror country.

Select a mirror.

The installer will configure apt.

Software selection

At the end of the mirror configuration process, it’s time to choose what “software collection” you want to install. In our case, we will select a basic system with SSH server support.

The installer will download and install selected software.

GRUB installation

Install GRUB boot loader, and wait for the process to finish.

At the end, just reboot in the newly installed system.

Conclusion

In this guide we have seen how to install the latest version of Debian, “Stretch”. Debian is one of the oldest GNU/Linux distributions, first released in 1993.

In future tutorials we will use Debian 9 as the server for different platforms and services.

Analytics Engines: Elasticsearch 5.4 on Ubuntu 16.04

elasticsearch analytics engine logo

Elasticsearch – A Distributed Analytics Engine

Elasticsearch is an open-source, highly scalable, full-text search and analytics engine. It is part of a full stack called Elastic Stack. It allows you to store and analyze data, even in big volumes, with near real time performances. This powerful analytics engine supports RESTful operations, so it is possible to use all the HTTP methods in combination with HTTP URIs for data management. Another advantage is the option to use different programming languages with Elasticsearch, such as Python or JavaScript.

An online web store is a great example of a project that could benefit from Elasticsearch. It is possible to use Elasticsearch to store the entire product catalog and inventory, providing ‘search’ and ‘autocomplete suggestions’ functionalities.

Elasticsearch’s great scalability also permits it to run both on a laptop and on a cluster of servers with petabytes of data.

Goals

In this tutorial we will see how to install Elasticsearch on a server running Ubuntu 16.04.

Prerequisites

  • One server running Ubuntu 16.04.
  • Oracle JDK 8 installed on the server.

Install Elasticsearch

Elasticsearch is provided in different formats: .zip, .tar.gz, .deb, .rpm and docker. In this guide we will use the .deb package.

Import Elasticsearch Key

Download and install the Elasticsearch public signing key by executing the following command:

$ wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

Install Elasticsearch from the Repository

Before proceeding with the installation process, we’ll need to install the

apt-transport-https

package:

$ sudo apt-get install apt-transport-https

Next, we’ll save the repository definition with the following command:

$ echo "deb https://artifacts.elastic.co/packages/5.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-5.x.list

Install Elasticsearch with apt:

$ sudo apt-get update && sudo apt-get install elasticsearch

Enable Elasticsearch for starting at boot time:

$ sudo systemctl enable elasticsearch

Configure Elasticsearch

Elasticsearch configuration files are stored in the /etc/elasticsearch directory. In particular, the main configuration files are:

  • elasticsearch.yml, for configuring the server-side part of this powerful analytics engine.
  • log4j2.properties, for configuring logging.

Configuration files use the YAML format.

Elasticsearch requires very little configuration; however, there are a number of settings which should be configured before putting it into use.

Open the elasticsearch.yml configuration file with a text editor:

$ sudo vim /etc/elasticsearch/elasticsearch.yml

Here, search for the cluster.name variable:

# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
#

Uncomment the cluster.name line and replace my-application with a cluster name, for example:

cluster.name: MyCluster

Note: a node can join a cluster only when it shares its cluster.name with all the other nodes of the cluster. Be sure that cluster.name describes the cluster’s purpose.

Next, change the node.name variable. As above, uncomment the line and change its value:

# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: node-1
#

These are the minimum settings required for running Elasticsearch. Of course, there are more details to work out in order to deploy this system on a cluster of servers.
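Putting the two settings together, a minimal elasticsearch.yml for this single-node setup would contain just the following (the commented network.host line is an optional, illustrative addition for when other hosts must reach the node):

```
cluster.name: MyCluster
node.name: node-1
# network.host: 192.168.1.10
```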

Save and close the file, then start Elasticsearch:

$ sudo systemctl start elasticsearch

Testing Elasticsearch

We can test Elasticsearch by executing the following command:

$ curl -X GET 'http://localhost:9200'

It should display something like this:

{
 "name" : "node-1",
 "cluster_name" : "MyCluster",
 "cluster_uuid" : "WqXLC-cUT5-bSVyisNRIgQ",
 "version" : {
 "number" : "5.4.1",
 "build_hash" : "2cfe0df",
 "build_date" : "2017-05-29T16:05:51.443Z",
 "build_snapshot" : false,
 "lucene_version" : "6.5.1"
 },
 "tagline" : "You Know, for Search"
}

This means that Elasticsearch is running correctly.

Conclusion

This concludes the basic Elasticsearch configuration. Look how easy it can be to install this analytics engine on an Ubuntu 16.04 server!

Logs Management: Graylog 2 on Ubuntu 16.04

Graylog: a powerful logs management system

Graylog is an open source logs management system which parses and enriches log messages, wire and event data from any source, thus providing a centralized configuration management system for third-party collectors, like fluentd, beats and nxlog. For example, with Graylog it is possible to enrich log messages with geo-coordinates translated from IP addresses, or to map a user ID to a user name.

Features

A few of Graylog’s most notable features include:

  • Central logs management system, which gives your team access to runtime configuration and log data without touching the Graylog servers.
  • Grouping users into roles to simplify permissions management. Graylog has a very powerful system for restricting data access to users, which can really come in handy.
  • LDAP integration.
  • A REST API for accessing log data programmatically.

Goals

This tutorial will cover Graylog installation and basic configuration on a machine running Ubuntu 16.04.

Prerequisites

  • One server running Ubuntu 16.04 with at least 2 GB of RAM.
  • MongoDB.
  • Elasticsearch 2.x.
  • Oracle JDK 8.

Getting started

If your system matches the above listed prerequisites, you can start the Graylog 2 installation process.

Keep the server updated and install some required packages:

$ sudo apt-get update && sudo apt-get upgrade
$ sudo apt-get install apt-transport-https uuid-runtime pwgen

Configure Elasticsearch

As noted above, Graylog 2.0.0 (and higher) requires Elasticsearch 2.x. You’ll want to modify the Elasticsearch configuration file, /etc/elasticsearch/elasticsearch.yml, setting the cluster name so that it matches the one set in the Graylog configuration file. In this tutorial, the cluster name chosen is graylog.

With a text editor, open the Elasticsearch configuration file:

$ sudo $EDITOR /etc/elasticsearch/elasticsearch.yml

Search for the cluster.name line and uncomment it. Next, modify it as follows:

# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
 cluster.name: graylog
#

Save and close the file, then restart the Elasticsearch service:

$ sudo systemctl restart elasticsearch

Install Graylog

Once the server configuration is complete, we can move onto Graylog installation. Configure the Graylog repository with the following commands:

$ wget https://packages.graylog2.org/repo/packages/graylog-2.2-repository_latest.deb
$ sudo dpkg -i graylog-2.2-repository_latest.deb

Next, install the package:

$ sudo apt-get update && sudo apt-get install graylog-server

At the end of installation process, enable Graylog to start at boot time:

$ sudo systemctl enable graylog-server

Before starting Graylog, we’ll need to configure it.

Configure Graylog

The Graylog configuration file is /etc/graylog/server/server.conf. We’ll have to edit some parameters in this file before we can start the logs management program.

First of all, we need to set the password_secret value. This must be at least 64 characters long. We will generate it using pwgen.

You can install this tool with apt:

$ sudo apt-get install pwgen

Next, using sed, we write the generated characters directly into the Graylog configuration file:

$ sudo -E sed -i -e "s/password_secret =.*/password_secret = $(pwgen -N 1 -s 128)/" /etc/graylog/server/server.conf
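If pwgen were not available, a hypothetical alternative is to build the secret from /dev/urandom using only coreutils:

```shell
# Generate a 96-character alphanumeric secret (Graylog requires at least 64).
secret=$(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 96)
echo "${#secret}"   # prints: 96
```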

To check that everything was done correctly, use the following command:

$ cat /etc/graylog/server/server.conf | grep password_secret

The command should display the

password_secret

line. In our case:

password_secret = hjg5nBbZQcgLVW3do5uw1irfbq9UiRwhISZgPie8r96dejt4hgWdHUJcIaK1onQfFFatbrPZ3WV4yEhoqX9ITtaEUmn9SKn2aRT62uCO9KRZGK81q2xrO1aMQnOELPqP

The next step is to set the

root_password_sha2

, which is the SHA-256 hash of our desired password. First, execute the following command:


$ sudo sed -i -e "s/root_password_sha2 =.*/root_password_sha2 = $(echo -n 'your_password' | shasum -a 256 | cut -d' ' -f1)/" /etc/graylog/server/server.conf
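Note that if shasum (a Perl tool) is not installed, GNU coreutils' sha256sum produces the same digest. The sketch below uses the placeholder 'your_password', which you would substitute with your own password:

```shell
# Compute the SHA-256 digest that goes into root_password_sha2.
# 'your_password' is a placeholder -- substitute your real password.
printf '%s' 'your_password' | sha256sum | cut -d' ' -f1
```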

To be able to connect to Graylog, we must also set the

rest_listen_uri

and

web_listen_uri

values to the hostname or public IP address of the machine. The REST API and the web interface must be reachable by everyone who uses the web interface, which means Graylog must listen on a public network interface.

Open the Graylog configuration file:

$ sudo $EDITOR /etc/graylog/server/server.conf

In this file, search for the

rest_listen_uri

line, which, by default, is:

# REST API listen URI. Must be reachable by other Graylog server nodes if you run a cluster.
# When using Graylog Collectors, this URI will be used to receive heartbeat messages and must be accessible for all collectors.
rest_listen_uri = http://127.0.0.1:9000/api/

Replace the

127.0.0.1

with the server's public IP.
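This edit can also be scripted with sed. The sketch below demonstrates the substitution on a throwaway file rather than the real configuration; SERVER_IP is a placeholder for your machine's address, and you would point sed (with sudo) at /etc/graylog/server/server.conf instead:

```shell
SERVER_IP=192.168.1.138   # placeholder: your server's public IP
tmpconf=$(mktemp)
printf 'rest_listen_uri = http://127.0.0.1:9000/api/\n' > "$tmpconf"
# Swap the loopback address for the public one on the rest_listen_uri line
sed -i "s|^rest_listen_uri = http://127.0.0.1:|rest_listen_uri = http://${SERVER_IP}:|" "$tmpconf"
cat "$tmpconf"   # prints: rest_listen_uri = http://192.168.1.138:9000/api/
rm -f "$tmpconf"
```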

Next, search for the

web_listen_uri

line:

# Web interface listen URI.
# Configuring a path for the URI here effectively prefixes all URIs in the web interface. This is a replacement
# for the application.context configuration parameter in pre-2.0 versions of the Graylog web interface.
#web_listen_uri = http://127.0.0.1:9000/

Uncomment it, and change the IP, just as you did in the

rest_listen_uri

step.
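As with the REST URI, this step can be sketched with sed on a scratch copy; here the substitution also strips the leading '#' that comments the line out. Again, SERVER_IP is a placeholder, and the real target file would be /etc/graylog/server/server.conf:

```shell
SERVER_IP=192.168.1.138   # placeholder: your server's public IP
tmpconf=$(mktemp)
printf '#web_listen_uri = http://127.0.0.1:9000/\n' > "$tmpconf"
# Uncomment the line and substitute the public IP in one pass
sed -i "s|^#web_listen_uri = http://127.0.0.1:|web_listen_uri = http://${SERVER_IP}:|" "$tmpconf"
cat "$tmpconf"   # prints: web_listen_uri = http://192.168.1.138:9000/
rm -f "$tmpconf"
```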

Save and close the file, then start Graylog:

$ sudo systemctl start graylog-server

Check the Graylog status with the following command:

$ sudo systemctl status graylog-server
graylog-server.service - Graylog server
   Loaded: loaded (/usr/lib/systemd/system/graylog-server.service; enabled; vend
   Active: active (running) ...

Testing Graylog

With a web browser on a client, go to 

http://192.168.1.138:9000/

. (Substitute your own server's address for 192.168.1.138.) The browser will show a login page:

graylog logs management system login page

Use admin as the user name, and the password entered in the configuration step ('your_password').

After logging in, you’ll see a ‘Getting Started’ page:

graylog logs management system getting started page

By going to System > Inputs, we can access the inputs configuration.

graylog logs management system inputs configuration page

This is where all inputs will be configured, which is the first step of data collection in Graylog for logs management.

Conclusion

At this point we have a Graylog server correctly up and running on an Ubuntu 16.04 machine. In a future guide we will see how to configure inputs and send data from a server to this powerful logs management system.

We will also see how to configure a multi-node Graylog system, for more advanced logs management.