
How to deploy Rocket.Chat on AWS – Part II

Rocket.Chat on AWS

Introduction

This is the second part of the tutorial on how to deploy Rocket.Chat on AWS (Amazon Web Services). In part one we saw how to configure an instance, get a SSL certificate and configure Route 53. Now, it’s time to set up NGINX, Docker and finally Rocket.Chat.

Install and configure NGINX

On the EC2 instance, install NGINX, which is available in the Ubuntu repositories:

# apt install nginx

Then configure it. To do this, first make a backup of the default configuration files:

# cd /etc/nginx/sites-available
# mv default default.backup

Next, create a new one:

# $EDITOR /etc/nginx/sites-available/default

In that, paste the following content:


server {
   listen 443 ssl;
   server_name mydomain.com;
   ssl_certificate /etc/letsencrypt/live/mydomain.com/fullchain.pem;
   ssl_certificate_key /etc/letsencrypt/live/mydomain.com/privkey.pem;
   ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
   ssl_prefer_server_ciphers on;
   ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
   root /usr/share/nginx/html;
   index index.html index.htm;
   location / {
     proxy_pass http://localhost:3000/;
     proxy_http_version 1.1;
     proxy_set_header Upgrade $http_upgrade;
     proxy_set_header Connection "upgrade";
     proxy_set_header Host $http_host;
     proxy_set_header X-Real-IP $remote_addr;
     proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
     proxy_set_header X-Forwarded-Proto https;
     proxy_set_header X-Nginx-Proxy true;
     proxy_redirect off;
   }
 }
 server {
   listen 80;
   server_name mydomain.com;
   return 301 https://$host$request_uri;
 }

This configuration stops serving plain HTTP on port 80: all traffic is redirected to port 443, where SSL provides a secure connection. The ssl_certificate and ssl_certificate_key directives point to the certificate and key generated in the previous part of this tutorial.
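As a toy illustration (not part of the setup), this is how the return 301 line composes its redirect target from the variables NGINX fills in at request time; the host and URI values below are made up:

```shell
# sketch: how NGINX builds the 301 target from $host and $request_uri
host="mydomain.com"
request_uri="/channel/general?msg=42"
echo "301 -> https://${host}${request_uri}"
```

Any URL requested over plain HTTP is thus reissued unchanged, just with the https scheme.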

In the location section, NGINX is configured as a reverse proxy that forwards requests to port 3000, the port used by Rocket.Chat.

Save, exit and stop NGINX:

# service nginx stop

Test NGINX with:

# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok 
nginx: configuration file /etc/nginx/nginx.conf test is successful

Now it’s time to start the web server:

# service nginx start

Go with a web browser to mydomain.com. A page displaying "502 Bad Gateway" will appear; this is fine! The important part is to check for a lock in the address bar, which means that the connection is correctly secured by Let's Encrypt Authority X1. The certificate will expire in 90 days, so remember to renew it.
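Since the certificate only lasts 90 days, renewal is worth automating. A minimal sketch, assuming the /opt/letsencrypt clone from the previous part of this tutorial (NGINX must release port 80 while the standalone plugin runs):

```shell
# free port 80 for the standalone plugin, renew, then bring NGINX back
service nginx stop
/opt/letsencrypt/letsencrypt-auto renew
service nginx start
```

These three commands could go in a small root-owned script run from cron, e.g. weekly; the schedule and script location are up to you.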

Install Docker

On the instance, install Docker and its dependencies by executing the following command:

# wget -qO- https://get.docker.com/ | sh

Next, to use Docker as the non-root user, add the ubuntu user to the docker group:

# usermod -aG docker ubuntu

Next, install Docker Compose:

# curl -L https://github.com/docker/compose/releases/download/1.4.2/docker-compose-Linux-x86_64 > /usr/local/bin/docker-compose
# chmod +x /usr/local/bin/docker-compose

Set up containers

First of all, create the following directories:

# mkdir -p /var/www/rocket.chat/data/runtime/db
# mkdir -p /var/www/rocket.chat/data/dump

Next, create a new compose configuration file:

# $EDITOR /var/www/rocket.chat/docker-compose.yml

In that file, paste the following content:


db:
  image: mongo:3.0
  volumes:
    - ./data/runtime/db:/data/db
    - ./data/dump:/dump
  command: mongod --smallfiles

rocketchat:
  image: rocketchat/rocket.chat:latest
  environment:
    - MONGO_URL=mongodb://db:27017/rocketchat
    - ROOT_URL=https://mydomain.com
  links:
    - db:db
  ports:
    - 3000:3000
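Before handing the containers over to an init system, the stack can be brought up once by hand to confirm that the images download and link correctly. A sketch, using the compose file above:

```shell
cd /var/www/rocket.chat
# start MongoDB, then Rocket.Chat, detached
/usr/local/bin/docker-compose up -d db
/usr/local/bin/docker-compose up -d rocketchat
# after a minute or two, Rocket.Chat should answer on port 3000
curl -I http://localhost:3000
```

Stop the containers again with docker-compose stop before configuring Upstart, so the jobs below can manage them.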

Configure Upstart

Let’s use Upstart to manage MongoDB and Rocket.Chat start and restart services. Create a new file for MongoDB:

# $EDITOR /etc/init/rocket_chat_mongo.conf

In it, paste:


description "MongoDB service for Rocket.Chat"

# Start MongoDB after docker is running
start on (started docker)
stop on runlevel [!2345]

# Automatically Respawn with finite limits
respawn
respawn limit 99 5

# Path to our app
chdir /var/www/rocket.chat

script
   # Showtime
   exec /usr/local/bin/docker-compose up db
end script

Save, exit, and make the same for Rocket.Chat:

# $EDITOR /etc/init/rocket_chat.conf

Pasting there:


description "Rocket.Chat service manager"

# Start Rocket.Chat only after mongo job is running
start on (started rocket_chat_mongo)
stop on runlevel [!2345]

# Automatically Respawn with finite limits
respawn
respawn limit 99 5

# Path to our app
chdir /var/www/rocket.chat

script
   # Bring up Rocket.Chat app
   exec /usr/local/bin/docker-compose up rocketchat
end script

Save and exit.
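With both job files in place, the jobs can be started by hand (as root) to verify them before relying on the boot sequence; start and status are Upstart's initctl shortcuts, and the job names match the .conf filenames:

```shell
# start the MongoDB job, then the Rocket.Chat job, and check the latter
start rocket_chat_mongo
start rocket_chat
status rocket_chat
```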

Conclusion

Restart the server. After logging in again, Docker should download and set up the images. After a few minutes, the containers can be seen with the following command:

# docker ps -a

Or, looking at the Upstart jobs log files:

# cat /var/log/upstart/rocket_chat_mongo.log
# cat /var/log/upstart/rocket_chat.log

From here you can use any web browser to go to mydomain.com, create a new admin user and start using Rocket.Chat.

Install – Configure WordPress with NGINX and HHVM

WordPress with NGINX and HHVM

Introduction

HipHop Virtual Machine (HHVM) is an open-source virtual machine designed for executing programs written in Hack and PHP. It uses a just-in-time compilation to provide better performance and supports PHP 5 and major features of PHP 7. This tutorial explains how to set up and run WordPress with NGINX and HHVM on openSUSE 42.2 Leap.

Getting started

Install NGINX

NGINX is available in the openSUSE repositories, so just use zypper to install it:

# zypper in nginx

Start it with systemd:

# systemctl start nginx

Check that it is running:


# systemctl status nginx
nginx.service - The nginx HTTP and reverse proxy server 
   Loaded: loaded (/usr/lib/systemd/system/nginx.service; disabled; vendor preset: disabled) 
   Active: active (running) since Mon 2017-03-13 13:49:14 CET; 26s ago

Install and configure MariaDB

As with NGINX, MariaDB is available in the repositories. Install it:

# zypper in mariadb mariadb-client

Fire it up and verify with systemd:

# systemctl start mysql
# systemctl status mysql
 mysql.service - MySQL server 
   Loaded: loaded (/usr/lib/systemd/system/mysql.service; disabled; vendor preset: disabled) 
   Active: active (running) since Mon 2017-03-13 13:56:27 CET; 5s ago

Configure its root account:

# mysql_secure_installation
New password:  
Re-enter new password:  
Password updated successfully! 
Reloading privilege tables.. 
 ... Success! 
 
Remove anonymous users? [Y/n] Y 
 ... Success! 
 
Disallow root login remotely? [Y/n] Y 
 ... Success! 
 
Remove test database and access to it? [Y/n] Y 
 - Dropping test database... 
 ... Success! 
 - Removing privileges on test database... 
 ... Success! 
 
Reload privilege tables now? [Y/n] Y 
 ... Success! 
 
Cleaning up... 
 
All done!  If you've completed all of the above steps, your MariaDB 
installation should now be secure. 
 
Thanks for using MariaDB!

Create a new user and database for WordPress. Log in to the MariaDB shell:

# mysql -u root -p
MariaDB [(none)]> CREATE DATABASE wordpress_db; 
MariaDB [(none)]> CREATE USER wordpressusr@localhost IDENTIFIED BY 'usr_strong_password'; 
MariaDB [(none)]> GRANT ALL PRIVILEGES ON wordpress_db.* TO wordpressusr@localhost IDENTIFIED BY 'usr_strong_password'; 
MariaDB [(none)]> FLUSH PRIVILEGES; 
MariaDB [(none)]> EXIT; 

Now, MariaDB is correctly installed and configured, and a new database for WordPress has been created. Take note of these credentials.

Install HHVM

To install HHVM, first you must add a repository:

# zypper addrepo http://download.opensuse.org/repositories/home:munix9/openSUSE_Leap_42.2/home:munix9.repo

Then refresh repositories with:

# zypper refresh

Enter ‘a’ to trust the key:

 Repository:       munix9's Home Project (openSUSE_Leap_42.2)               
  Key Name:         home:munix9 OBS Project <home:munix9@build.opensuse.org> 
  Key Fingerprint:  3FE0C0AC 1FD6F103 4B818A14 BDF3F6AC D4D81407             
  Key Created:      Tue Aug 16 11:47:54 2016                                 
  Key Expires:      Thu Oct 25 11:47:54 2018                                 
  Rpm Name:         gpg-pubkey-d4d81407-57b2e14a                             
 
 
Do you want to reject the key, trust temporarily, or trust always? [r/t/a/? shows all options] (r): a 
Retrieving repository 'munix9's Home Project (openSUSE_Leap_42.2)' metadata ......[done] 
Building repository 'munix9's Home Project (openSUSE_Leap_42.2)' cache ...........[done]

Now, install HHVM:

# zypper in hhvm hhvm-nginx hhvm-fastcgi

Configure HHVM and NGINX

Create a user and a group named nginx:

# useradd -M -s /bin/bash -U nginx

Next, you’ll need to configure HHVM for running with NGINX as the web server. To do this, edit the following file:

# $EDITOR /etc/hhvm/server.ini

Uncomment line 3 to enable Unix sockets for HHVM. After the modification, the content of the file should look like this:

; some settings already defined in hhvm.service 
;hhvm.server.port = 9000 
hhvm.server.file_socket = /run/hhvm/server.sock 
hhvm.server.type = fastcgi 
hhvm.server.default_document = index.php 
; if you use mod_rewrite or get a 404 not found please try adding: 
;hhvm.server.fix_path_info = true 
 
hhvm.log.use_log_file = true 
hhvm.log.file = /var/log/hhvm/server.log 
 
hhvm.repo.central.path = /var/lib/hhvm/hhvm.hhbc 
;hhvm.repo.central.file_mode = 420 
;hhvm.repo.central.file_user = hhvm 
;hhvm.repo.central.file_group = hhvm

Next, edit the HHVM service, changing the user to nginx:

# $EDITOR /usr/lib/systemd/system/hhvm.service

[Unit] 
Description=HipHop Virtual Machine (FCGI) 
After=syslog.target network.target 
Before=apache2.service nginx.service 
 
[Service] 
PrivateTmp=true 
PIDFile=/run/hhvm/server.pid 
ExecStartPre=/bin/rm -f /run/hhvm/server.sock 
ExecStart=/usr/bin/hhvm --config /etc/hhvm/php.ini --config /etc/hhvm/server.ini --user nginx --mode server -vServer.Type=fastcgi -vServer.FileSocket=/run/hhvm/server.sock -vPidFile=/run/hhvm/server.pid 
 
[Install] 
WantedBy=multi-user.target 

Save, exit, and change the owner of /var/run/hhvm:

# chown -R nginx:nginx /var/run/hhvm/

Next, configure HHVM to work with NGINX.

First, execute the following command:

# cp /etc/nginx/hhvm.conf.example /etc/nginx/hhvm.conf

Then, edit this configuration file:

# $EDITOR /etc/nginx/hhvm.conf

As follows:

location ~ \.(hh|php)$ { 
    root  /srv/www/htdocs; 
    fastcgi_keep_conn on; 
    #fastcgi_pass   127.0.0.1:9000; 
    fastcgi_pass   unix:/run/hhvm/server.sock; 
    fastcgi_index  index.php; 
    fastcgi_param  SCRIPT_FILENAME $document_root$fastcgi_script_name; 
    include        fastcgi_params; 
}

Next, edit the main NGINX configuration file, /etc/nginx/nginx.conf. Inside the server block (around line 59), add:

include /etc/nginx/hhvm.conf;

Save, exit, start HHVM and restart NGINX:

# systemctl start hhvm
# systemctl restart nginx

Test NGINX:

# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok 
nginx: configuration file /etc/nginx/nginx.conf test is successful

Now test HHVM. To accomplish this, create a new file called info.php in /srv/www/htdocs and write the following code into it:

<?php phpinfo(); ?>

Save and exit. With a web browser, go to http://localhost:3000 — no, to http://localhost/info.php; it should display the phpinfo() page.

Create a new Virtual Host

Now it's time to create a new Virtual Host file for NGINX, which will be stored in the folder /etc/nginx/vhosts.d/. Create the directory:

# mkdir -p /etc/nginx/vhosts.d/

Inside it, create a new file:

# $EDITOR /etc/nginx/vhosts.d/mysite.conf

And paste the following configuration:

server {
	# Redirect non-www to www
	server_name  mywebsite.co;
	rewrite ^(.*) http://www.mywebsite.co$1 permanent;
}

server {

        listen   80;
        server_name www.mywebsite.co;
        root /srv/www/mysite; 
        index index.php index.html index.htm;

        location / {
                try_files $uri $uri/ =404;
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
              root /srv/www/htdocs;
        }

        # HHVM running through a Unix socket
	location ~ \.(hh|php)$ {
    		root   /srv/www/mysite;
    		fastcgi_keep_conn on;
    		#fastcgi_pass   127.0.0.1:9000;
    		fastcgi_pass   unix:/var/run/hhvm/server.sock;
    		fastcgi_index  index.php;
    		fastcgi_param  SCRIPT_FILENAME $document_root$fastcgi_script_name;
   		include        fastcgi_params;
	}

}

Save and exit.

Create the web site root directory, specified in the Virtual Host configuration file:

# mkdir -p /srv/www/mysite

Install WordPress

Download WordPress in the site root directory:

# cd /srv/www/mysite
# wget https://wordpress.org/latest.zip

Next, extract the archive:

# unzip latest.zip
# mv wordpress/* .

Copy the sample configuration file and edit it:

# cp wp-config-sample.php wp-config.php
# $EDITOR wp-config.php

Edit the database lines as follows:

// ** MySQL settings - You can get this info from your web host ** // 
/** The name of the database for WordPress */ 
define('DB_NAME', 'wordpress_db'); 
 
/** MySQL database username */ 
define('DB_USER', 'wordpressusr'); 
 
/** MySQL database password */ 
define('DB_PASSWORD', 'usr_strong_password'); 
 
/** MySQL hostname */ 
define('DB_HOST', 'localhost');

Save and exit.
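While wp-config.php is open, it's also worth replacing the placeholder authentication keys and salts further down in the file. WordPress provides a public generator for these; a common approach is to fetch a fresh set with curl:

```shell
# print fresh keys/salts to paste over the AUTH_KEY ... NONCE_SALT defines
curl https://api.wordpress.org/secret-key/1.1/salt/
```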

Conclusion

Now, just visit www.mywebsite.co with a web browser. The WordPress installer should start, giving you the chance to finish the configuration and start using WordPress!

How to install – configure Ghost on openSUSE 42.2 Leap

Install - configure Ghost blog on openSUSE 42.2 Leap

What is Ghost?

In the Web 2.0 era, blogs are an important part of life for many people, and the popularity of WordPress and Tumblr, for instance, demonstrates this fact.
Today we’ll talk about a platform for building and running online publications (like blogs, magazines, etc) called Ghost. This tool is open source and fully hackable, written in JavaScript and running on Node.js.
In this tutorial, we'll see how to install and configure Ghost on openSUSE Leap 42.2, using Apache as the web server.

Node.js version

Ghost runs on top of Node.js. To be more exact, developers decided to support only LTS versions. In this tutorial we will be using version 4.2.x.
Even though this means that Ghost cannot use the latest features of Node.js, the choice seems reasonable: it leaves the developers room to spend their time building new features and fixing bugs rather than tracking changes in Node and retesting the platform for every release.
For final users and admins, this results in a more stable and supported platform which many find appealing.

Getting started – Install Node.js and npm

Node.js version 4 is already available in openSUSE. To install it, just use zypper:

# zypper in nodejs
The following NEW package is going to be installed:
  nodejs4

1 new package to install.
Overall download size: 3.3 MiB. Already cached: 0 B. After the operation, additional
12.7 MiB will be used.
Continue? [y/n/? shows all options] (y): y
Retrieving package nodejs4-4.6.1-3.1.x86_64        (1/1),   3.3 MiB ( 12.7 MiB unpacked)
Retrieving: nodejs4-4.6.1-3.1.x86_64.rpm ...........................[done (846.4 KiB/s)]
Checking for file conflicts: .....................................................[done]
(1/1) Installing: nodejs4-4.6.1-3.1.x86_64 .......................................[done]

Next, install npm:

# zypper in npm

Check the version:

$ npm --version
2.15.9

Installing Ghost

Change the directory to /srv/www and download Ghost:

# cd /srv/www
# wget https://ghost.org/zip/ghost-latest.zip

Unzip it into a new directory named ghost:

# unzip -d ghost ghost-latest.zip

Go to this new directory and install Ghost with npm:

# cd ghost
# npm install --production

Configure Ghost

The ghost directory contains an example configuration file. Copy it:

# cp config.example.js config.js

Next, create a new user named ghostusr:

# useradd -d /srv/www -s /bin/bash -U ghostusr
# passwd ghostusr

Set this user to be the owner of the ghost directory:

# chown -R ghostusr:ghostusr /srv/www/ghost

Now, it's possible to test Ghost with npm by executing the following commands:

# su - ghostusr
$ cd ghost
$ npm start --production

It should result in output similar to this:

Migrations: Creating tables...
Migrations: Creating table: posts
Migrations: Creating table: users
Migrations: Creating table: roles
Migrations: Creating table: roles_users
Migrations: Creating table: permissions
Migrations: Creating table: permissions_users
Migrations: Creating table: permissions_roles
Migrations: Creating table: permissions_apps
Migrations: Creating table: settings
Migrations: Creating table: tags
Migrations: Creating table: posts_tags
Migrations: Creating table: apps
Migrations: Creating table: app_settings
Migrations: Creating table: app_fields
Migrations: Creating table: clients
Migrations: Creating table: client_trusted_domains
Migrations: Creating table: accesstokens
Migrations: Creating table: refreshtokens
Migrations: Creating table: subscribers
Migrations: Running fixture populations
Migrations: Creating owner
Ghost is running in production... 
Your blog is now available on http://my-ghost-blog.com

Open a new terminal window and test to see if Ghost is effectively running by executing the following command:

$ curl -I localhost:2368
HTTP/1.1 200 OK
X-Powered-By: Express
Cache-Control: public, max-age=0
Content-Type: text/html; charset=utf-8
Content-Length: 4554
ETag: W/"11ca-93Do3c+nffISfn1kLrmRZg"
Vary: Accept-Encoding
Date: Mon, 13 Mar 2017 07:59:39 GMT
Connection: keep-alive

In the terminal window running Ghost, stop it by entering CTRL+C.
Now, create a new systemd service:

# $EDITOR /etc/systemd/system/ghost.service

And paste the following configuration there:

[Unit]
Description=Ghost Blog - Publication platform
After=network.target

[Service]
Type=simple
# Ghost installation Directory
WorkingDirectory=/srv/www/ghost
User=ghostusr
Group=ghostusr
ExecStart=/usr/bin/npm start --production
ExecStop=/usr/bin/npm stop --production
Restart=always
SyslogIdentifier=Ghost

[Install]
WantedBy=multi-user.target

Reload systemd daemon:

# systemctl daemon-reload

and then start the new service:

# systemctl start ghost

Check the status:

# systemctl status ghost

It should show the following:

ghost.service - Ghost Blog - Publication platform
   Loaded: loaded (/etc/systemd/system/ghost.service; disabled; vendor preset: disabled)
   Active: active (running) since Mon 2017-03-13 09:06:41 CET; 5s ago

To make it start up at boot time:

# systemctl enable ghost

Installing and configuring Apache

Install Apache 2 with zypper:

# zypper in apache2

And create a new Virtual Host for Ghost:

# $EDITOR /etc/apache2/sites-available/ghost.conf

There, paste:

 <VirtualHost *:80> 
    #Domain Name 
    ServerName myghostblog.com 
    ServerAlias www.myghostblog.com 
 
    #HTTP proxy/gateway server 
    ProxyRequests off  
    ProxyPass / http://127.0.0.1:2368/  
    ProxyPassReverse / http://127.0.0.1:2368/      
</VirtualHost>

Save and exit.
Activate the proxy modules, enable the Ghost site, and restart everything:

# a2enmod proxy proxy_http
# ln -s /etc/apache2/sites-available/ghost.conf /etc/apache2/sites-enabled/ghost.conf
# systemctl restart apache2
# systemctl restart ghost

Open a web browser and visit http://localhost:2368: Ghost is up and running!

Enable SSL

Create a new directory which will contain certificates:

# mkdir -p /etc/apache2/certs

And generate a new certificate there:

# openssl req -x509 -nodes -days 365 -newkey rsa:4096 -keyout /etc/apache2/certs/ghost.key -out /etc/apache2/certs/ghost.crt

Change permissions:

# chmod 600 /etc/apache2/certs/*

Finally, edit the Virtual Host configuration to enable SSL:

# $EDITOR /etc/apache2/sites-available/ghost.conf
<VirtualHost *:80>
    ServerName myghostblog.com
    ServerAlias www.myghostblog.com

    # Force http to https
    Redirect permanent / https://myghostblog.com/
#    ProxyRequests off 
#    ProxyPass / http://127.0.0.1:2368/ 
#    ProxyPassReverse / http:/127.0.0.1:2368/     
</VirtualHost>

<VirtualHost *:443>

   ServerName myghostblog.com

   SSLEngine on
   SSLCertificateFile /etc/apache2/certs/ghost.crt
   SSLCertificateKeyFile /etc/apache2/certs/ghost.key

   ProxyPass / http://127.0.0.1:2368/
   ProxyPassReverse / http://127.0.0.1:2368/
   ProxyPreserveHost   On

   RequestHeader set X-Forwarded-Proto "https"

</VirtualHost>

Save, exit, and restart Apache:

# a2enmod ssl headers
# systemctl restart apache2

Conclusion

Ghost is now up and running. Go to localhost:2368/ghost/ to finish setting up the admin account, and then start blogging!

How to deploy Rocket.Chat on AWS – Part I

Rocket.Chat on AWS

Introduction to Rocket.Chat

We have already talked about Rocket.Chat, a private messaging system like Slack and Mattermost, and how to install and configure it on a “local” server.
However, many infrastructures are based on Amazon Web Services, which offers a suite of cloud computing services, in particular the Amazon Elastic Compute Cloud (Amazon EC2).
This series of tutorials explains how to deploy Rocket.Chat on Amazon Web Services (AWS), using Ubuntu 14.04 LTS as the AMI.

Getting started

In AWS Services, go to EC2 > Instances > Launch Instance. There, select Ubuntu Server 14.04 LTS AMI.
For Instance Type, choose t2.micro. T2 instances are well suited to websites and web applications, build servers, microservices, and test or staging environments.
For Rocket.Chat, t2.micro seems a good choice.
Next, adjust the storage size, or add a second encrypted volume, and click on Next. In 'Tag Instance', add a Value for the Name key and continue.
Next, it's necessary to configure a security group. For now, be sure to allow traffic to the server on port 80 until the SSL certificate is created; after that step, the rule can be removed. Click on Launch, then create a new key pair and launch the instance.

Allocate an IP

In AWS Services, go to EC2 and then to Elastic IP. Here, allocate a new address and associate it with the instance launched in the previous steps.
In the details, it's important to take note of the Public DNS value, because it will be needed later. It should be in the format ec2-00-111-22-33.us-west-2.compute.amazonaws.com.

Configure DNS using AWS Route 53

In AWS Services, go to Route 53. Amazon Route 53 is a highly available and scalable cloud DNS web service, designed to provide a reliable and cost-effective way to route end users to Internet applications. It is IPv6 compliant.
Using Amazon Route 53 Traffic Flow's visual editor, it's possible to manage how end users are routed to an application's endpoints, whether in a single region or distributed. Amazon Route 53 also offers a domain name registration service.
Create a new Hosted Zone, entering a domain name and choosing Public Hosted Zone. Click on Create.
After selecting this new zone, create a record set and select CNAME as the type. Then, enter the Public DNS name from the previous step in the value field and click on Create.

Get a SSL certificate

In 2017 it's necessary to use HTTPS, and Chrome now flags sites that don't use it. So, it's important to get an SSL certificate.
For this task we will use Let’s Encrypt.
On the instance, clone the letsencrypt repository:

# git clone https://github.com/letsencrypt/letsencrypt /opt/letsencrypt

Next, check on port 80:

# netstat -na | grep ':80.*LISTEN'

If any processes are listening, kill them all.
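On a fresh instance, the listener (if any) is usually a pre-installed web server. A sketch for stopping the usual suspects; the service names are assumptions about what may be present:

```shell
# stop common services that may be holding port 80 (errors ignored if absent)
service apache2 stop 2>/dev/null
service nginx stop 2>/dev/null
```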
Next, to get the certificate, go to /opt/letsencrypt and run the standalone plugin with the following commands:

# cd /opt/letsencrypt
# ./letsencrypt-auto certonly --standalone --email "email_address_here" -d "mydomain.com"

During this step, letsencrypt will listen on port 80.
When this is finished, it’s possible to remove the previously created security group.
Now, it's possible to check the certificate and keys. In particular, the directory /etc/letsencrypt/archive should contain the following files:

  • cert.pem
  • chain.pem
  • fullchain.pem
  • privkey.pem

fullchain.pem is the certificate file, while privkey.pem is its private key.

Conclusions

With the previous steps, the first half of the journey is complete. It’s important to execute all steps carefully, because this is the basis on which Rocket.Chat will be deployed on an AWS Instance.

Install Magento on Ubuntu 16.04

Install Magento on Ubuntu 16.04

Introduction

Magento is an open-source eCommerce platform and content management system for eCommerce websites. It uses MySQL 5.6+ or MariaDB as its database, and it's compatible with both the NGINX and Apache web servers. It also requires PHP 7+ and a number of PHP extensions.
This tutorial demonstrates how to install Magento 2 on an Ubuntu 16.04 server using NGINX and MySQL.

Getting started

Update the server

First, update the package index:

# apt update

Install NGINX

NGINX is available in the Ubuntu repositories, so you can easily install it with apt:

# apt install nginx

Install PHP

Since the site will likely serve many visitors, it's better to install PHP-FPM (FastCGI Process Manager). Install it together with the extensions required by Magento:

# apt install php7.0-mcrypt php7.0-fpm php7.0-curl php7.0-mysql php7.0-cli php7.0-xsl php7.0-json php7.0-intl php7.0-dev php-pear php7.0-mbstring php7.0-common php7.0-zip php7.0-gd php-soap

Install curl as well:

# apt install curl libcurl3

Editing PHP settings

Modify the two configuration files:

  • /etc/php/7.0/fpm/php.ini – FPM configuration file
  • /etc/php/7.0/cli/php.ini – PHP-CLI configuration file
In both, add the following lines (or edit them if already present):

memory_limit = 2G
max_execution_time = 3600
opcache.save_comments = 1
zlib.output_compression = On

Save, exit and restart PHP-FPM so that changes will be applied:

# systemctl restart php7.0-fpm
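For repeatable setups, the same edits can be scripted. Below is a small sketch: set_ini is a hypothetical helper (not part of the tutorial) that updates a directive in a php.ini-style file, or appends it if missing; the commented loop shows how it would be applied to the two files above:

```shell
# set_ini KEY VALUE FILE: replace an existing (possibly commented) directive,
# or append it to the end of the file if it is not present at all
set_ini() {
  key=$1; value=$2; file=$3
  if grep -qE "^;?[[:space:]]*${key}[[:space:]]*=" "$file"; then
    sed -i -E "s|^;?[[:space:]]*${key}[[:space:]]*=.*|${key} = ${value}|" "$file"
  else
    printf '%s = %s\n' "$key" "$value" >> "$file"
  fi
}

# usage on the tutorial's files (run as root):
# for f in /etc/php/7.0/fpm/php.ini /etc/php/7.0/cli/php.ini; do
#   set_ini memory_limit 2G "$f"
#   set_ini max_execution_time 3600 "$f"
#   set_ini opcache.save_comments 1 "$f"
#   set_ini zlib.output_compression On "$f"
# done
```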

Install and configure MySQL

To install MySQL, execute the following command:

# apt install -y mysql-server mysql-client

Next, set up a root account for it:

# mysql_secure_installation
Set root password? [Y/n]
New password:
Re-enter new password:
Remove anonymous users? [Y/n]
Disallow root login remotely? [Y/n]
Remove test database and access to it? [Y/n]
Reload privilege tables now? [Y/n]

If you need to import a large number of products into Magento, increase the value of the max_allowed_packet option. Edit the configuration file:

# $EDITOR /etc/mysql/mysql.cnf

There, search for the mentioned line and modify it as follows:

[mysql]
max_allowed_packet=64M

Save, exit and restart MySQL:

# systemctl restart mysql

Next, start a MySQL command prompt:

# mysql -u root -p

Create a new user and database:

mysql> CREATE DATABASE magento_db;
mysql> CREATE USER 'magentousr'@'localhost' IDENTIFIED BY 'user_strong_password';
mysql> GRANT ALL PRIVILEGES ON magento_db.* TO 'magentousr'@'localhost' IDENTIFIED BY 'user_strong_password';
mysql> FLUSH PRIVILEGES;
mysql> EXIT;

Downloading Magento

Magento 2 will be installed in the /var/www/magento2 directory. The installation requires PHP Composer; install it by executing the following command:

# curl -sS https://getcomposer.org/installer | php

Move the composer.phar file to /usr/local/bin:

# mv composer.phar /usr/local/bin/composer

Test that everything is going smoothly so far with:

# composer --version

This should print out the composer version.

To obtain Magento (the Community Edition, in this tutorial), first go to https://www.magentocommerce.com/magento-connect/ and create an account there. Next, follow My Account > Developer > Secure Keys, and generate new keys.

Now it’s time to download Magento. On a terminal, execute the following command:

# composer create-project --repository-url=https://repo.magento.com/ magento/project-community-edition /var/www/magento2

During this process, use the public key as the username and the private key as the password.

Configure a Virtual Host

Create a new Virtual Host file:

# $EDITOR /etc/nginx/sites-available/magento

In that file, paste the following configuration:

upstream fastcgi_backend {
        server  unix:/run/php/php7.0-fpm.sock;
}

server {

        listen 80;
        server_name www.myecommerce.com;
        set $MAGE_ROOT /var/www/magento2;
        set $MAGE_MODE developer;
        include /var/www/magento2/nginx.conf.sample;
}

Magento already contains an NGINX configuration file, so it’s not necessary to create one.

Save and exit.
Next, activate the virtual host:

# ln -s /etc/nginx/sites-available/magento /etc/nginx/sites-enabled/

Restart NGINX:

# systemctl restart nginx

Install Magento

In

/var/www/magento2

there is a binary file named magento. This will be used for installing Magento 2.
Execute:

# /var/www/magento2/bin/magento setup:install --backend-frontname="admin" \
--key="cja8Jadsjwoqpgk93670Dfhu47m7rrIp" \
--db-host="localhost" \
--db-name="magento_db" \
--db-user="magentousr" \
--db-password="user_strong_password" \
--language="en_US" \
--currency="USD" \
--timezone="My/Timezone" \
--use-rewrites=1 \
--use-secure=0 \
--base-url="http://www.myecommerce.com" \
--base-url-secure="https://www.myecommerce.com" \
--admin-user=admin \
--admin-password=admin_password \
--admin-email=admin@myecommerce.com \
--admin-firstname=admin \
--admin-lastname=user \
--cleanup-database

At the end:

[SUCCESS]: Magento installation complete.
[SUCCESS]: Magento Admin URI: /admin

Final configuration

As usual in cases like this, the last step is a “graphical configuration” through a web browser. Go to www.myecommerce.com/admin, and log in with the admin credentials you created during the installation process.
After signing in, the Magento Dashboard should appear, signifying that everything went well. The eCommerce website is now ready to be filled with products!

Install Rocket.Chat on Ubuntu 16.04

Rocket.Chat on Ubuntu 16.04

Introduction

Rocket.Chat is a messaging system for team communication, like Slack. It has various features, including:

    • Video conferences
    • Help desk chat
    • File sharing
    • Voice messages
    • Link previews

This tutorial explains how to install and configure your Rocket.Chat Server on a Ubuntu 16.04 system.

Getting started

First, install all dependencies required by Rocket.Chat:

# apt install graphicsmagick build-essential

Install MongoDB

Rocket.Chat requires MongoDB, so let's install that first. MongoDB provides packages for Ubuntu LTS. First, add its signing key:

# apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv EA312927

Next, add the repository with:

# echo "deb http://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/3.2 multiverse" | tee /etc/apt/sources.list.d/mongodb-org-3.2.list

Update apt repositories and install the database:

# apt update
# apt install mongodb-org

Start MongoDB and set it to run automatically at boot time, with systemd:

# systemctl start mongod
# systemctl enable mongod

Install Node.js and npm

Node.js and npm are required by Rocket.Chat and are both available in the Ubuntu repositories. Install them with the following commands:

# apt install nodejs
# apt install npm

Next, install n (a tool which lets admins change the Node.js version) using npm:

# npm install -g n

The messaging system requires Node.js 4.5+, so ensure that you choose 4.5:

# n 4.5

Check that everything is working so far with the following command:

# node --version
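As a quick sanity check, the reported version can be compared against the 4.5 minimum with sort -V. This is only a sketch; the fallback value used when node is not on the PATH is purely illustrative.

```shell
# Compare the active Node.js version against the 4.5 minimum.
# If node is not on PATH, fall back to a sample value so the
# comparison logic is still demonstrated.
current="$(node --version 2>/dev/null || echo v4.5.0)"
current="${current#v}"          # strip the leading "v"
required="4.5.0"
# sort -V orders version strings; if the required version sorts
# first (or ties), the installed version meets the minimum.
if [ "$(printf '%s\n%s\n' "$required" "$current" | sort -V | head -n1)" = "$required" ]; then
    echo "Node.js $current meets the 4.5 minimum"
else
    echo "Node.js $current is too old; run: n 4.5" >&2
fi
```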

Configure MongoDB Replica set (OPTIONAL)

This is an optional step, but those who want performance improvements should follow it. Rocket.Chat Server uses a MongoDB replica set. The concept behind this replica is that it forms a group of MongoDB processes all working on the same data set, which provides high availability.
To enable these replicas, edit the mongod.conf configuration file:

# $EDITOR /etc/mongod.conf

There, add this section:

replication:
      replSetName:  "001-rs"

Save, exit and restart MongoDB:

# systemctl restart mongod

Next, run its shell and initiate the replica set:

# mongo
> rs.initiate()

The output should look something like this:

{
  "info2" : "no configuration explicitly specified -- making one",
  "me" : "localhost:27017",
  "info" : "Config now saved locally.  Should come online in about a minute.",
  "ok" : 1
}

Pay attention to the last line! It’s important that “ok” is exactly “1”. Any other number would mean that an error has occurred.
The prompt should become:

001-rs:PRIMARY>

which signifies that MongoDB is using the replica set.

Exit and add the following environment variable to the system:

MONGO_OPLOG_URL=mongodb://localhost:27017/local?replicaSet=001-rs

This can be done, for example, by editing the ~/.bashrc file and adding the following:

export MONGO_OPLOG_URL=mongodb://localhost:27017/local?replicaSet=001-rs

Next, restart MongoDB:

# systemctl restart mongod

Installing Rocket.Chat

Now, it’s time to install Rocket.Chat in the /var/www directory. Download the latest version with:

# cd /var/www
# curl -L https://rocket.chat/releases/latest/download -o rocket.chat.tgz

Extract the archive:

# tar xzf rocket.chat.tgz

Rename the extracted folder:

# mv bundle Rocket.Chat

Next, set environment variables and run the Rocket.Chat server with the following commands:

# cd Rocket.Chat/programs/server
# npm install
# cd ../..

# export ROOT_URL=http://your-host-name.com-as-accessed-from-internet:3000/
# export MONGO_URL=mongodb://localhost:27017/rocketchat
# export PORT=3000

# node main.js

Those who are using the replica set should set the MONGO_URL variable with this content:

mongodb://localhost:27017/rocketchat?replicaSet=001-rs
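Running node main.js by hand ties the server to the terminal session. For unattended operation, the same command and environment variables can be wrapped in a systemd unit; the sketch below is an assumption, not part of the official install steps, and the node path assumes the default /usr/local/bin location used by the n tool:

```
[Unit]
Description=Rocket.Chat server
After=network.target mongod.service

[Service]
# Paths and variable values mirror the manual steps above; adjust to your setup.
WorkingDirectory=/var/www/Rocket.Chat
Environment=ROOT_URL=https://chat.mydomain.com
Environment=MONGO_URL=mongodb://localhost:27017/rocketchat?replicaSet=001-rs
Environment=PORT=3000
ExecStart=/usr/local/bin/node main.js
Restart=always

[Install]
WantedBy=multi-user.target
```

Saved as /etc/systemd/system/rocketchat.service, it can be enabled with systemctl enable rocketchat and started with systemctl start rocketchat.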

Rocket.Chat is installed and configured, but it requires configuration behind a web server. In this tutorial we’ll be using NGINX.

Installing and configuring NGINX

Install the web server:

# apt install nginx

Create a new SSL directory, in which certificates will be stored:

# mkdir -p /etc/nginx/ssl/

In this directory, generate a self-signed certificate and restrict the key’s permissions:

# cd /etc/nginx/ssl
# openssl req -new -x509 -days 365 -nodes -out /etc/nginx/ssl/rocketchat.crt -keyout /etc/nginx/ssl/rocketchat.key
# chmod 400 rocketchat.key
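The openssl command above prompts interactively for the certificate fields; in scripts, -subj can supply them up front. A sketch of the non-interactive variant, where the temporary directory stands in for /etc/nginx/ssl and chat.mydomain.com is a placeholder:

```shell
# Generate the same self-signed certificate without prompts.
ssl_dir="$(mktemp -d)"            # use /etc/nginx/ssl on the real server
openssl req -new -x509 -days 365 -nodes \
    -subj "/CN=chat.mydomain.com" \
    -out "$ssl_dir/rocketchat.crt" \
    -keyout "$ssl_dir/rocketchat.key" 2>/dev/null
chmod 400 "$ssl_dir/rocketchat.key"
ls "$ssl_dir"
```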

Next, create a Virtual Host configuration:

# $EDITOR /etc/nginx/sites-available/rocketchat

There, paste the following configuration:

# Upstreams
upstream backend {
    server 127.0.0.1:3000;
}
 
# Redirect Options
server {
  listen 80;
  server_name chat.mydomain.com;
  # enforce https
  return 301 https://$server_name$request_uri;
}
 
# HTTPS Server
server {
    listen 443;
    server_name chat.mydomain.com;
 
    error_log /var/log/nginx/rocketchat.error.log;
 
    ssl on;
    ssl_certificate /etc/nginx/ssl/rocketchat.crt;
    ssl_certificate_key /etc/nginx/ssl/rocketchat.key;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # dont use SSLv3 ref: POODLE
 
    location / {
        proxy_pass http://backend;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $http_host;
 
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Nginx-Proxy true;
 
        proxy_redirect off;
    }
}

Save, exit and activate this configuration:

# ln -s /etc/nginx/sites-available/rocketchat /etc/nginx/sites-enabled/rocketchat

Then, test NGINX:

# nginx -t

If no errors occur, restart NGINX:

# systemctl restart nginx

Update the environment variables and run Rocket.Chat:

# cd /var/www/Rocket.Chat/
# export ROOT_URL=https://chat.mydomain.com
# export MONGO_URL=mongodb://localhost:27017/rocketchat?replicaSet=001-rs
# export PORT=3000
# node main.js

The final step is to open https://chat.mydomain.com in a web browser to register a new admin account and finish the graphical configuration. At the end, the Rocket.Chat messaging system will be ready for daily usage.

Conclusion

There you have it! We’ve just explained how to install and configure your Rocket.Chat Server on a Ubuntu 16.04 system using NGINX. This useful online communication program can help your team work more efficiently and with more collaboration! Don’t forget that you can use various web servers for running Rocket.Chat, no need to stick with NGINX!

Install and configure ownCloud 9.1.4 on openSUSE Leap 42.2

ownCloud 9.1.4

Introduction

ownCloud is an open source file syncing and sharing software, much like Dropbox. Place files in a local shared directory and they are immediately synchronized to the server and to other devices running the ownCloud Desktop Sync Client, Android app, or iOS app.

This tutorial explains how to install and configure the server side of ownCloud on openSUSE 42.2.

Getting started

First of all, install SuSEfirewall2, a script that generates iptables rules from the configuration stored in /etc/sysconfig/SuSEfirewall2. Install it with zypper:

# zypper in SuSEfirewall2

There is also a YaST configuration module, but it doesn’t permit you to configure all firewall settings, so it’s necessary to manually edit the configuration file:

# $EDITOR /etc/sysconfig/SuSEfirewall2

In there, search for the FW_SERVICES_EXT_TCP line and change it as follows:

FW_SERVICES_EXT_TCP="22 80 443"

These are: ssh, http, and https ports.
Save and exit.

Next, start it and enable it to start at boot time:

# systemctl start SuSEfirewall2
# systemctl enable SuSEfirewall2

Restart sshd:

# systemctl restart sshd

Install NGINX

NGINX is also available on openSUSE repositories, so:

# zypper in nginx

Start and enable it:

# systemctl start nginx
# systemctl enable nginx

Installing MariaDB

Like NGINX, MariaDB is available as an openSUSE package:

# zypper in mariadb mariadb-client

Next:

# systemctl start mysqld
# systemctl enable mysqld

Configure its root account:

# mysql_secure_installation
Enter current password for root (enter for none):
Set root password? [Y/n]
New password:
Re-enter new password:
Remove anonymous users? [Y/n]
Disallow root login remotely? [Y/n]
Reload privilege tables now? [Y/n]

Now it’s possible to log in to the MariaDB shell and create a new database and user that will be used for ownCloud:

# mysql -u root -p

In the database system shell:

mysql> CREATE DATABASE myownclouddb;
mysql> CREATE USER 'ocuser'@'localhost' IDENTIFIED BY 'user_strong_password';
mysql> GRANT ALL PRIVILEGES ON myownclouddb.* TO 'ocuser'@'localhost' IDENTIFIED BY 'user_strong_password';
mysql> FLUSH PRIVILEGES;
mysql> EXIT;

Now MariaDB is correctly configured for ownCloud.

Install PHP-FPM

ownCloud requires PHP 5.4+. Install PHP-FPM (FastCGI Process Manager), an alternative PHP FastCGI implementation that is useful for handling sites with many visitors. In this guide we’ll be using PHP 7.
Through zypper:

# zypper in php7-fpm php7-gd php7-mysql php7-mcrypt php7-curl php7-pear php7-zip php7-json php7-ldap

Next, copy the default php-fpm configuration file, executing the following commands:

# cd /etc/php7/fpm
# cp php-fpm.conf.default php-fpm.conf

Open that file with a text editor:

# $EDITOR php-fpm.conf

There, look for the following lines and modify them as shown:

error_log = log/php-fpm.log
user = nginx
group = nginx
listen = /var/run/php-fpm.sock
listen.owner = nginx
listen.group = nginx
listen.mode = 0660

Save and exit.
Now, modify php.ini:

# $EDITOR /etc/php7/cli/php.ini

Uncomment the cgi.fix_pathinfo line (around line 761) and change its value:

cgi.fix_pathinfo=0

Save, exit and copy this file to conf.d:

# cp php.ini /etc/php7/conf.d/

The PHP7 session directory is /var/lib/php7. Change its owner to the nginx user:

# chown -R nginx:nginx /var/lib/php7/

Configure NGINX to work with PHP-FPM

Create a new NGINX configuration file, making a backup of the old one:

# cd /etc/nginx
# cp nginx.conf nginx.conf.bk
# $EDITOR nginx.conf

Inside the server block (around line 65), add the following configuration:

 location ~ \.php$ {
                root /srv/www/htdocs;
                try_files $uri =404;
                include /etc/nginx/fastcgi_params;
                fastcgi_pass unix:/var/run/php-fpm.sock;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
       }

Save, exit and test nginx:

# nginx -t

You should see the following lines:

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

At the end:

# systemctl start php-fpm
# systemctl enable php-fpm
# systemctl restart nginx

Install ownCloud

Go to the web root directory, which is /srv/www, and download ownCloud there:

# wget https://download.owncloud.org/community/owncloud-9.1.4.tar.bz2

Extract the archive:

# tar xf owncloud-9.1.4.tar.bz2

In the extracted owncloud folder, create a new data directory and change the owner of the whole tree to the nginx user:

# mkdir owncloud/data
# chown -R nginx:nginx owncloud/

Configure a Virtual Host for ownCloud

The next step is to configure a Virtual Host in NGINX for ownCloud.

# mkdir /etc/nginx/vhosts.d && cd /etc/nginx/vhosts.d

There, create a new file:

# $EDITOR owncloud.conf

Paste the following content in that file:

upstream php-handler {
  #server 127.0.0.1:9000;
  server unix:/var/run/php-fpm.sock;
}

server {
  listen 80; # If you have a SSL certificate (Recommended), change this line with "listen 443 ssl;" and add certificate lines;
  server_name storage.mydomain.com;

  # Path to the root of your installation
  root /srv/www/owncloud/;
  # set max upload size
  client_max_body_size 10G;
  fastcgi_buffers 64 4K;

  # Disable gzip to avoid the removal of the ETag header
  gzip off;

  # Uncomment if your server is built with the ngx_pagespeed module
  # This module is currently not supported.
  #pagespeed off;

  rewrite ^/caldav(.*)$ /remote.php/caldav$1 redirect;
  rewrite ^/carddav(.*)$ /remote.php/carddav$1 redirect;
  rewrite ^/webdav(.*)$ /remote.php/webdav$1 redirect;

  index index.php;
  error_page 403 /core/templates/403.php;
  error_page 404 /core/templates/404.php;

  location = /robots.txt {
    allow all;
    log_not_found off;
    access_log off;
  }

  location ~ ^/(?:\.htaccess|data|config|db_structure\.xml|README){
    deny all;
  }

  location / {
    # The following 2 rules are only needed with webfinger
    rewrite ^/.well-known/host-meta /public.php?service=host-meta last;
    rewrite ^/.well-known/host-meta.json /public.php?service=host-meta-json last;

    rewrite ^/.well-known/carddav /remote.php/carddav/ redirect;
    rewrite ^/.well-known/caldav /remote.php/caldav/ redirect;

    rewrite ^(/core/doc/[^\/]+/)$ $1/index.html;

    try_files $uri $uri/ =404;
  }

  location ~ \.php(?:$|/) {
    fastcgi_split_path_info ^(.+\.php)(/.+)$;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_param PATH_INFO $fastcgi_path_info;
    fastcgi_param HTTPS on;
    fastcgi_pass php-handler;
    fastcgi_intercept_errors on;
  }

  # Adding the cache control header for js and css files
  # Make sure it is BELOW the location ~ \.php(?:$|/) { block
  location ~* \.(?:css|js)$ {
    add_header Cache-Control "public, max-age=7200";
    # Add headers to serve security related headers
    add_header Strict-Transport-Security "max-age=15768000; includeSubDomains; preload;";
    add_header X-Content-Type-Options nosniff;
    add_header X-Frame-Options "SAMEORIGIN";
    add_header X-XSS-Protection "1; mode=block";
    add_header X-Robots-Tag none;
    # Optional: Don't log access to assets
    access_log off;
  }

  # Optional: Don't log access to other assets
  location ~* \.(?:jpg|jpeg|gif|bmp|ico|png|swf)$ {
    access_log off;
  }
}

Save, exit and restart services:

# systemctl restart nginx
# systemctl restart php-fpm
# systemctl restart mysql
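Optionally, ownCloud’s background jobs can be driven by the system scheduler instead of page loads; the interval recommended upstream is 15 minutes. A sketch of the crontab entry for the nginx user, with the path matching the install location used above:

```
*/15 * * * * php -f /srv/www/owncloud/cron.php
```

Add it with crontab -u nginx -e, then select “Cron” in the background-jobs section of the admin page.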

Conclusions

The server side is now fully configured. The last step is to visit http://storage.mydomain.com with a web browser and finish the graphical configuration. At the end of this process your ownCloud Dashboard will be fully available!

Install and configure LMD and Clam AntiVirus on CentOS 7

LMD and Clam Antivirus

Introduction

Linux Malware Detect (LMD) is a malware detector and scanner for GNU/Linux, designed particularly for shared hosting environments. It is released under the GNU GPLv2 license, and it can be installed on cPanel WHM and GNU/Linux environments alongside other detection software such as ClamAV.
ClamAV itself is an open source antivirus solution that detects trojans, malware, viruses and other malicious software, and supports multiple platforms, including Windows, macOS, and GNU/Linux.
This tutorial explains how to install LMD and Clam AntiVirus on a CentOS 7 server.

Getting started – EPEL repository and Mailx

First of all, install the EPEL repository and mailx. The latter is a mail processing system based on Berkeley Mail 8.1; it provides enhanced features for interactive use, such as caching and disconnected operation for IMAP, message threading, scoring, and filtering, and it is also usable as a mail batch language, both for sending and receiving mail.
First, install EPEL:

# yum install epel-release

and then Mailx:

# yum install mailx

In this scenario, Mailx will be used by LMD for sending scan reports to your email address.

Install LMD

The package is not available in CentOS or EPEL, so a manual installation is required.
Download LMD and extract it:

# wget http://www.rfxn.com/downloads/maldetect-current.tar.gz
# tar -xzvf maldetect-current.tar.gz

As root, run the installer script install.sh from the extracted directory:

# cd maldetect-x.x
# ./install.sh

Next, make a link to the maldet command in /bin:

# ln -s /usr/local/maldetect/maldet /bin/maldet
# hash -r

Configure LMD

LMD has been installed into /usr/local/maldetect/. In that directory there is a configuration file, which we’re going to modify:

# $EDITOR /usr/local/maldetect/conf.maldet

Enable email alerts by changing the value to 1 on line 16.

email_alert="1"

Then, search for the email address line, and modify it as follows:

email_addr="root@mydomain.me"

The ClamAV clamscan binary will be used as the default scan engine, because it provides high-performance scanning of large file sets. To enable this, search for and edit the following line:

scan_clamscan="1"

Next, it’s possible to enable quarantining to move malware to the quarantine during the scan process. To do this, change the following line:

quarantine_hits="1"

Next, enable cleaning of string-based malware injections by changing:

quarantine_clean="1"

That’s all for LMD configuration.
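Scans can also be scheduled. LMD’s installer already drops a daily cron job, but a more targeted entry can be added; the sketch below writes one to a temporary file purely for illustration (on a real server it would go in /etc/cron.d/maldet-scan, and the 02:30 time is an arbitrary choice, not from the article):

```shell
# Illustrative cron entry for a nightly maldet scan of the web root.
cronfile="$(mktemp)"              # stand-in for /etc/cron.d/maldet-scan
cat > "$cronfile" <<'EOF'
30 2 * * * root /usr/local/maldetect/maldet -a /var/www/html >/dev/null 2>&1
EOF
cat "$cronfile"
```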

Install ClamAV

Now that LMD is correctly installed and configured, let’s install Clam AntiVirus to get the best scanning results. ClamAV is available in the EPEL repository.
So, using yum:

# yum install clamav clamav-devel

After ClamAV has been installed, update the virus databases with freshclam:

# freshclam

Testing LMD and ClamAV

Now it’s possible to test LMD with a manual scan, by executing maldet against the /var/www/html directory.
In the web root directory, download some sample malware with wget:

# cd /var/www/html
# wget http://www.eicar.org/download/eicar.com.txt
# wget http://www.eicar.org/download/eicar_com.zip
# wget http://www.eicar.org/download/eicarcom2.zip

Next, scan the web root directory, as previously said, with maldet:

# maldet -a /var/www/html

During this process, it’s possible to see that LMD is using the ClamAV scanner engine to perform the scan: it will find three malware hits.
Check the report with the following command:

# maldet --report SCANID

SCANID is a numerical value found in the Maldet output.

Next, verify that there is an email containing the report:

# tail -f /var/mail/root

If everything was well configured, that email should contain all the required information.

It’s also possible to acquire a list of all reports:

# maldet -e list

Or “filter” files to scan. For instance, to scan files modified in the last 10 days:

# maldet -r /var/www/html 10

For more information, call the built-in help, which lists all options recognized by LMD:

# maldet --help

There you go! That’s one great way to protect a GNU/Linux web server from infections.

How to install Seafile on CentOS 7

Seafile on CentOS 7

Introduction

Seafile is a private file hosting platform, similar to Dropbox, Google Drive, OneDrive and Mega. Its parts are released under open source licenses, in particular:

  • Seafile iOS client: Apache License v2
  • Seafile Android client: GPLv3
  • Desktop syncing client: GPLv2
  • Seafile Server core: AGPLv3
  • Seahub (Seafile server Web UI): Apache License v2

It supports file encryption and group sharing.

This tutorial explains how to install Seafile on CentOS 7 with NGINX as your web server and MariaDB as your database.

Getting started

First of all, Seafile is written in Python, so it requires the following dependencies:

# yum install python-imaging MySQL-python python-memcached python-ldap python-urllib3

Install and configure MariaDB

Install MariaDB. First, add the EPEL repository:

# yum install epel-release

then:

# yum install mariadb mariadb-server

At the end of this process, start the program and configure the MariaDB root account, executing:

# systemctl start mysqld

and

# mysql_secure_installation
Set root password? [Y/n]
New password:
Re-enter new password:
Remove anonymous users? [Y/n]
Disallow root login remotely? [Y/n]
Remove test database and access to it? [Y/n]
Reload privilege tables now? [Y/n]

Seafile requires three different databases (one for each component):

  • ccnet-db
  • seafile-db
  • seahub-db

So, create these databases and a user, seauser:

# mysql -u root -p

In the MariaDB shell:

mysql> CREATE DATABASE `ccnet-db` CHARACTER SET = 'utf8';
mysql> CREATE DATABASE `seafile-db` CHARACTER SET = 'utf8';
mysql> CREATE DATABASE `seahub-db` CHARACTER SET = 'utf8';
mysql> CREATE USER 'seauser'@'localhost' IDENTIFIED BY 'user_strong_password';
mysql> GRANT ALL PRIVILEGES ON `ccnet-db`.* TO 'seauser'@'localhost';
mysql> GRANT ALL PRIVILEGES ON `seafile-db`.* TO 'seauser'@'localhost';
mysql> GRANT ALL PRIVILEGES ON `seahub-db`.* TO 'seauser'@'localhost';
mysql> FLUSH PRIVILEGES;
mysql> EXIT;

Install NGINX

Since the EPEL repository is available, it’s possible to install NGINX with yum:

# yum install nginx

Start it with systemd:

# systemctl start nginx.service

Create a user and a group, both named nginx:

# adduser --user-group --system --no-create-home nginx

Install and configure Seafile

Create a new directory:

# mkdir /var/www/seafile
# cd /var/www/seafile

There, download Seafile with wget:

# wget https://bintray.com/artifact/download/seafile-org/seafile/seafile-server_6.0.8_x86-64.tar.gz

Extract the archive:

# tar xf seafile-server_6.0.8_x86-64.tar.gz

Rename the extracted directory:

# mv seafile-server-6.0.8 seafile-server
# cd seafile-server

There is a script named setup-seafile-mysql.sh that configures the database; execute it:

# ./setup-seafile-mysql.sh

It will ask for some information:

  • server name: myserver
  • server ip or domain: localhost
  • seafile data dir: press Enter, and it will use the current directory
  • fileserver port: Enter, and it should use 8082

Next, it will display the following:

-------------------------------------------------------
Please choose a way to initialize Seafile databases:
-------------------------------------------------------

[1] Create new ccnet/seafile/seahub databases
[2] Use existing ccnet/seafile/seahub databases

Choose option 2, and then:

  • use default host: localhost
  • default port: 3306
  • mysql user: ‘seauser’
  • password for Seafile mysql user: ‘user_strong_password’
  • ccnet database: ‘ccnet-db’
  • seafile database: ‘seafile-db’
  • seahub database: ‘seahub-db’

Next, the script will create required tables for Seafile.

Start Seafile and Seahub:

# ./seafile.sh start
# ./seahub.sh start

During execution, seahub.sh will ask for admin information, specifically an email address and password.

After this, Seafile will be running and it will be possible to access it with a web browser, at localhost:8000.
Next, you’ll need to configure NGINX as the reverse proxy. But first, it’s necessary to create a systemd service.

Configuring services

Change the owner of the Seafile installation directory and the cache directory to the nginx user:

# chown -R nginx:nginx /var/www/*
# chown -R nginx:nginx /tmp/seahub_cache

Then create a service:

# $EDITOR /etc/systemd/system/seafile.service

In this file, paste the following configuration:

[Unit]
Description=Seafile - the open source, self-hosted file sync
Before=seahub.service
After=network.target mariadb.service
 
[Service]
Type=oneshot
ExecStart=/var/www/seafile/seafile-server/seafile.sh start
ExecStop=/var/www/seafile/seafile-server/seafile.sh stop
RemainAfterExit=yes
User=nginx
Group=nginx
 
[Install]
WantedBy=multi-user.target

Save, exit and do the same with SeaHub:

# $EDITOR /etc/systemd/system/seahub.service

and paste:

[Unit]
Description=SeaHub
After=network.target seafile.service mariadb.service
 
[Service]
Type=oneshot
ExecStart=/var/www/seafile/seafile-server/seahub.sh start-fastcgi
ExecStop=/var/www/seafile/seafile-server/seahub.sh stop
RemainAfterExit=yes
User=nginx
Group=nginx
 
[Install]
WantedBy=multi-user.target

Save, exit and then:

# systemctl daemon-reload
# systemctl start seafile
# systemctl start seahub

Configure NGINX

Seafile is running correctly; now configure NGINX to serve it as a reverse proxy. Create a new Virtual Host file:

# $EDITOR /etc/nginx/conf.d/seafile.conf

and there:



server {
    listen 80;
    server_name seafile.mydomain.com;

    proxy_set_header X-Forwarded-For $remote_addr;

    location / {
        fastcgi_pass    127.0.0.1:8000;
        fastcgi_param   SCRIPT_FILENAME     $document_root$fastcgi_script_name;
        fastcgi_param   PATH_INFO           $fastcgi_script_name;

        fastcgi_param   SERVER_PROTOCOL     $server_protocol;
        fastcgi_param   QUERY_STRING        $query_string;
        fastcgi_param   REQUEST_METHOD      $request_method;
        fastcgi_param   CONTENT_TYPE        $content_type;
        fastcgi_param   CONTENT_LENGTH      $content_length;
        fastcgi_param   SERVER_ADDR         $server_addr;
        fastcgi_param   SERVER_PORT         $server_port;
        fastcgi_param   SERVER_NAME         $server_name;
        fastcgi_param   REMOTE_ADDR         $remote_addr;

        access_log      /var/log/nginx/seahub.access.log;
        error_log       /var/log/nginx/seahub.error.log;
        fastcgi_read_timeout 36000;
    }

    location /seafhttp {
        rewrite ^/seafhttp(.*)$ $1 break;
        proxy_pass http://127.0.0.1:8082;
        client_max_body_size 0;
        proxy_connect_timeout  36000s;
        proxy_read_timeout  36000s;
        proxy_send_timeout  36000s;
        send_timeout  36000s;
    }

    location /media {
        root /path/to/your/directory;
    }
}

Save, exit and test NGINX, like this:

# nginx -t

Configure the domain in ccnet.conf and seahub_settings.py

Modify the value of SERVICE_URL in ccnet.conf to let Seafile know the domain, protocol and port chosen:

# $EDITOR /var/www/seafile/conf/ccnet.conf

and make the change:

SERVICE_URL = http://seafile.mydomain.com

Save, exit and edit the SeaHub configuration file:

# $EDITOR /var/www/seafile/conf/seahub_settings.py

There, uncomment and set FILE_SERVER_ROOT:

FILE_SERVER_ROOT = 'http://seafile.mydomain.com/seafhttp'

Save, exit and restart services:

# systemctl restart seafile
# systemctl restart seahub
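The SERVICE_URL edit above can also be scripted with sed. A sketch, demonstrated on a temporary copy so the substitution logic is visible (on the server the target file is /var/www/seafile/conf/ccnet.conf):

```shell
# Rewrite SERVICE_URL in place, as the manual edit does.
conf="$(mktemp)"                              # stand-in for ccnet.conf
printf 'SERVICE_URL = http://localhost:8000\n' > "$conf"
sed -i 's|^SERVICE_URL.*|SERVICE_URL = http://seafile.mydomain.com|' "$conf"
cat "$conf"
```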

Test Seafile

With a web browser, go to URL: http://seafile.mydomain.com; it will show a login form in which you can enter the admin account info you previously created. That’s all! Now you can use Seafile like any other cloud storage system!

How to install and use lnav on CentOS 7

lnav on CentOS 7

Introduction

lnav, which stands for Log File Navigator, is a CLI log file viewer built for small-scale setups. It is totally free, easy to use and customizable.
As all sysadmins know, GNU/Linux stores log files in the /var/log directory. So if any problems occur, the admin needs to open and read various log files in that directory. In the absence of additional tools, it is sometimes impossible to know which log file contains the most relevant information, which means manually going through all the files, checking their timestamps and trying to diagnose the problem. This is a monotonous and time-consuming task that you do not want to find yourself doing! Thankfully, with lnav both sysadmins and home users can accomplish this task simply and quickly. Let’s look at how to install lnav on CentOS 7.

Installing lnav on Centos 7

Installation is as easy as reading this line. There are two ways:

  • through the EPEL repository
  • building from source

Using the EPEL repository

If not yet present, install the EPEL repository with:

# yum install epel-release

and then install the utility:

# yum install lnav

It’s a light piece of software (just 1.1MB), so installation will be almost instantaneous.

Building from source

Download the source code:

$ wget https://github.com/tstack/lnav/releases/download/v0.8.1/lnav-0.8.1.tar.gz

Extract it with:

$ tar xf lnav-0.8.1.tar.gz

Lnav requires the following software packages:

  • libpcre – The Perl Compatible Regular Expression (PCRE) library
  • sqlite – The SQLite database engine
  • ncurses – The ncurses text UI library
  • readline – The readline line editing library
  • zlib – The zlib compression library
  • bz2 – The bzip2 compression library
  • re2c – The re2c scanner generator
  • libcurl – The cURL library for downloading files from URLs. Version 7.23.0 or higher is required

Next:

$ cd lnav-0.8.1
$ mkdir release
$ cd release
$ ../configure --prefix=/usr/local
$ make
# make install

Using lnav

Let’s first look at all the options taken by lnav:

usage: lnav [options] [logfile1 logfile2 ...]

A curses-based log file viewer that indexes log messages by type
and time to make it easier to navigate through files quickly.

Key bindings:
  ?     View/leave the online help text.
  q     Quit the program.

Options:
  -h         Print this message, then exit.
  -H         Display the internal help text.
  -I path    An additional configuration directory.
  -i         Install the given format files and exit.
  -C         Check configuration and then exit.
  -d file    Write debug messages to the given file.
  -V         Print version information.

  -a         Load all of the most recent log file types.
  -r         Load older rotated log files as well.
  -t         Prepend timestamps to the lines of data being read in
             on the standard input.
  -w file    Write the contents of the standard input to this file.

  -c cmd     Execute a command after the files have been loaded.
  -f path    Execute the commands in the given file.
  -n         Run without the curses UI. (headless mode)
  -q         Do not print the log messages after executing all
             of the commands or when lnav is reading from stdin.

Optional arguments:
  logfile1          The log files or directories to view.  If a
                    directory is given, all of the files in the
                    directory will be loaded.

Examples:
  To load and follow the syslog file:
    $ lnav

  To load all of the files in /var/log:
    $ lnav /var/log

  To watch the output of make with timestamps prepended:
    $ make 2>&1 | lnav -t

Running without arguments

As root, run lnav without arguments:

# lnav

It will open the /var/log/messages log file.

It’s also possible to restrict the analysis to a single directory. For instance:

# lnav /var/log/cups

The same thing can be done from inside a running lnav session:

# lnav

Next, type :open /var/log/cups. Without closing the program, it will display the CUPS log file information. The view can then be closed with q, which reveals an interesting detail: lnav implements some keybindings from the vi text editor. To navigate inside a log file, admins can use the h, j, k, l keys or the arrow keys.

Old rotated log files

Sometimes the information is not available in the most recent files and it’s necessary to look into old rotated log files. In these cases, use the -r option:

# lnav -r

Conclusions

This rapid overview has demonstrated how inspecting log files in a small scale system can be made easier by using this free and lightweight tool. It is available for all *nix platforms, so why not go ahead and use it! Everyone likes to avoid a headache…

How to Measure Disk Performance with fio and IOPing

fio and IOPing

Whether it’s a server or a PC for work, what usually limits performance is disk speed. Even SSDs are not yet comparable in speed to RAM and CPU.
There are different tools, with or without a graphical interface, written for testing disk speed. There are also people who use dd, for example:

dd if=/dev/zero of=test_file bs=64k count=16k conv=fdatasync

However, in our opinion dd is the worst software for benchmarking I/O performance.

In fact:

  • it is a single-threaded, sequential-write test; real services, such as those behind a web server, use more than one thread and rarely perform long-running sequential writes
  • it writes a small amount of data, so the result can be skewed by caching or by the RAID controller
  • it executes for just a few seconds, which is not enough to produce consistent results
  • there are no reading speed tests

All these points lead to one conclusion: better to use something else. For disk benchmarking, two kinds of metrics give a complete overview: IOPS (I/O Operations Per Second) and latency. This tutorial explains how to measure IOPS with fio, and disk latency with IOPing, on a RHEL 7 system.

Install fio

First of all, install the EPEL repository:

# wget https://mirrors.n-ix.net/fedora-epel/epel-release-latest-7.noarch.rpm
# yum localinstall epel-release-latest-7.noarch.rpm

Next, install fio with yum:

# yum install fio

Testing IOPS with fio

RW Performance

The first test is for measuring random read/write performances. In a terminal, execute the following command:

# fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=random_read_write.fio --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=75

During the test, the terminal window will display an output like the following one:

test: (g=0): rw=randrw, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=64
fio-2.2.8
Starting 1 process
test: Laying out IO file(s) (1 file(s) / 4096MB)
Jobs: 1 (f=1): [m(1)] [0.1% done] [447KB/131KB/0KB /s] [111/32/0 iops] [eta 01h:...]
Jobs: 1 (f=1): [m(1)] [0.2% done] [451KB/127KB/0KB /s] [112/31/0 iops]

So, the program will create a 4GB file (--size=4G), and perform 4KB reads and writes using a three-reads-for-every-write ratio (75%/25%, as specified with the option --rwmixread=75), spread within the file, with 64 operations running at a time. The RW ratio can be adjusted to simulate various usage scenarios.
At the end, it will display the final results:

test: (groupid=0, jobs=1): err= 0: pid=4760: Thu Mar  2 13:23:28 2017
  read : io=7884.0KB, bw=864925B/s, iops=211, runt=  9334msec
  write: io=2356.0KB, bw=258468B/s, iops=63, runt=  9334msec
  cpu          : usr=0.46%, sys=2.35%, ctx=2289, majf=0, minf=29
  IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.5%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued    : total=r=1971/w=589/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: io=7884KB, aggrb=844KB/s, minb=844KB/s, maxb=844KB/s, mint=9334msec, maxt=9334msec
  WRITE: io=2356KB, aggrb=252KB/s, minb=252KB/s, maxb=252KB/s, mint=9334msec, maxt=9334msec

Disk stats (read/write):
    dm-2: ios=1971/589, merge=0/0, ticks=454568/120101, in_queue=581406, util=98.44%, aggrios=1788/574, aggrmerge=182/15, aggrticks=425947/119120, aggrin_queue=545252, aggrutil=98.48%
  sda: ios=1788/574, merge=182/15, ticks=425947/119120, in_queue=545252, util=98.48%

Note from the author: I ran fio on my laptop, so the last output was obtained running the test with a 10MB file; as can be seen above, the 4GB option would have taken more than 1 hour.
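When running fio from scripts, it helps to pull the headline IOPS numbers out of the summary automatically. Below is a minimal sketch, assuming the fio 2.x summary format shown above; the sample lines and variable names are illustrative (newer fio releases can instead emit machine-readable output with --output-format=json):

```shell
# Two summary lines in the fio 2.x format (sample data from the run above)
summary='  read : io=7884.0KB, bw=864925B/s, iops=211, runt=  9334msec
  write: io=2356.0KB, bw=258468B/s, iops=63, runt=  9334msec'

# Grab the value that follows "iops=" for each direction
read_iops=$(printf '%s\n' "$summary" | awk -F'iops=' '/read/  {sub(/,.*/, "", $2); print $2}')
write_iops=$(printf '%s\n' "$summary" | awk -F'iops=' '/write/ {sub(/,.*/, "", $2); print $2}')

echo "read: ${read_iops} IOPS, write: ${write_iops} IOPS"
```

In a real script the summary would come from the saved fio output, e.g. by piping the run through tee fio.log and parsing that file.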

Random read performance

In this case, the command is:

# fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=random_read.fio --bs=4k --iodepth=64 --size=4G --readwrite=randread

The output will be similar to the RW case, just specialized in the read case.

Random write performance

In this case:

# fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=random_write.fio --bs=4k --iodepth=64 --size=4G --readwrite=randwrite

As above, but for the random write case.

Latency measures with IOPing

As stated in the introduction, the second part of a benchmark is the latency measurement. To accomplish this task, install IOPing, also available in the EPEL repository.

# yum install ioping

Execute it:

# ioping -c 100 .

The -c 100 option sets the number of requests ioping will make. The program also takes as an argument the file and/or device to check; in this case, the current working directory. The program output is:

4 KiB <<< . (xfs /dev/dm-2): request=1 time=16.3 ms (warmup)
4 KiB <<< . (xfs /dev/dm-2): request=2 time=253.3 us
4 KiB <<< . (xfs /dev/dm-2): request=3 time=284.0 ms
...
4 KiB <<< . (xfs /dev/dm-2): request=96 time=175.6 us (fast)
4 KiB <<< . (xfs /dev/dm-2): request=97 time=258.7 us (fast)
4 KiB <<< . (xfs /dev/dm-2): request=98 time=277.6 us (fast)
4 KiB <<< . (xfs /dev/dm-2): request=99 time=242.3 us (fast)
4 KiB <<< . (xfs /dev/dm-2): request=100 time=36.1 ms (fast)

--- . (xfs /dev/dm-2) ioping statistics ---
99 requests completed in 3.99 s, 396 KiB read, 24 iops, 99.3 KiB/s
generated 100 requests in 1.65 min, 400 KiB, 1 iops, 4.04 KiB/s
min/avg/max/mdev = 163.5 us / 40.3 ms / 760.0 ms / 118.5 ms

The last line shows the disk's latency statistics: minimum, average, maximum and standard deviation.
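That statistics footer can be parsed in scripts too. A small sketch, assuming the exact min/avg/max/mdev line format shown above (the sample data and variable name are illustrative):

```shell
# The last line of ioping's statistics (sample data from the run above)
stats='min/avg/max/mdev = 163.5 us / 40.3 ms / 760.0 ms / 118.5 ms'

# The average latency is the second of the four slash-separated values
avg=$(printf '%s\n' "$stats" | awk -F'= ' '{print $2}' | awk -F' / ' '{print $2}')
echo "average latency: $avg"
```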

Conclusion

Here are two useful tools which can give information about disk speed and latency, with very clear output, and which can be customized for various test cases.

How to install Visual Studio Code and .NET Core on RHEL 7

Visual Studio Code

Introduction

It was November 2014 when Microsoft announced the open sourcing of .NET with a project named .NET Core. It was announced as a smaller set of the .NET Framework, with many of the same APIs, and including “runtime, framework, compiler and tools components that support a variety of operating systems and chip targets” as stated in MSDN. This was an important announcement because .NET is a widely-used general development platform.
One year later, Red Hat and Microsoft announced a collaboration which resulted in access to .NET on Red Hat Enterprise Linux and Red Hat OpenShift.

In June 2016, Red Hat announced that .NET would be available via the integrated hybrid support partnership between both companies, making “Red Hat the only commercial Linux distribution to feature full, enterprise-grade support for .NET, opening up platform choice for enterprises seeking to use .NET on a flexible Linux and container-based environments.”

Enable .NET Core repositories

If you use RHEL 7 for development, you should have a developer subscription (available at no cost through the Red Hat Developer Program).

Check for the .NET repositories with:

# subscription-manager repos --list | egrep dotnet

If there are repositories listed, it means that .NET can be installed. So, enable the repo:

# subscription-manager repos --enable=rhel-7-server-dotnet-rpms

Install .NET Core

Once the repository is enabled, it’s possible to install .NET with yum. Just:

# yum install rh-dotnetcore11

Working with .NET Core

The .NET Core packages are designed to allow multiple versions of the software to be installed concurrently. To allow this, every package is added to the runtime environment with the command scl enable. When run, it sets up the environment variables and then runs the specified command. Its changes affect only the command run by scl and the processes started from that command. This helps in maintaining a “clean” environment.

So, in a new terminal window, as user, execute the command:

$ scl enable rh-dotnetcore11 bash

This will start a new Bash instance, which has access to .NET Core 1.1 (the one installed previously).
Execute:

dotnet --version

to check that everything works.
Now, executing the exit command will leave the .NET Core environment and return to a “normal” Bash.

Install Visual Studio Code

GNU/Linux systems have a lot of text editors and IDEs, so anyone can use the tool they like most for writing code.
But in the case of .NET Core and C#, a good option is Visual Studio Code, the open source editor written by Microsoft.
To install the 64-bit code editor on Red Hat Enterprise Linux 7, execute the following steps:

First, set up the yum repository as follows:

# rpm --import https://packages.microsoft.com/keys/microsoft.asc
# sh -c 'echo -e "[code]\nname=Visual Studio Code\nbaseurl=https://packages.microsoft.com/yumrepos/vscode\nenabled=1\ngpgcheck=1\ngpgkey=https://packages.microsoft.com/keys/microsoft.asc" > /etc/yum.repos.d/vscode.repo'

Next, update the package cache and install Visual Studio with:

# yum check-update
# yum install code

Using Visual Studio Code with .NET Core

Using scl, start the editor with .NET Core enabled. The syntax is the same as in the Bash example:

$ scl enable rh-dotnetcore11 code

Click on File->Open Folder and open the folder in which the Hello World program will be saved; Visual Studio Code will ask to install a C# extension. Do it.
Next, open the integrated terminal from Visual Studio Code by typing CTRL+` (backquote), as suggested on the Welcome page of the editor.
There, execute:

$ dotnet new

This will create two files: Program.cs and project.json. Open the first one by clicking on it in the left sidebar; it should contain a simple Hello World program. The editor will probably suggest resolving some dependencies; just click on Yes and wait.
For running the program, open the integrated terminal and execute:

$ dotnet run

And that’s all that is needed to start working with .NET Core and Visual Studio Code on Red Hat Enterprise Linux 7!

How to install Mattermost on RHEL 7.1

Mattermost

Introduction

Mattermost is an open source, private cloud Slack-alternative. A workplace messaging system for web, PCs and phones, released under the MIT license.
In a previous tutorial we talked about how to install it on Ubuntu 16.04.
Now, let’s see how to install and configure Mattermost on a RHEL 7.1 machine using MySQL as the database.

Install Database

On the server, download the MySQL 5.7 community release package by executing the following command:

# wget http://dev.mysql.com/get/mysql57-community-release-el7-9.noarch.rpm

and install the yum repository from that file with:

# yum localinstall mysql57-community-release-el7-9.noarch.rpm

Next, install MySQL:

# yum install mysql-community-server

and start it:

# systemctl start mysqld

After executing this command for the first time, MySQL will generate a temporary password for the root account. To retrieve it, just:

# grep 'temporary password' /var/log/mysqld.log

This command will output something like this:

2017-03-02T08:21:27.969295Z 1 [Note] A temporary password is generated for root@localhost: Ed4SxpDyuH(y
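Since the password is the last whitespace-separated field of that log line, it can also be captured directly into a shell variable. A minimal sketch assuming the log format shown above (the sample line and variable name are illustrative):

```shell
# A sample line in the format mysqld writes to /var/log/mysqld.log
line='2017-03-02T08:21:27.969295Z 1 [Note] A temporary password is generated for root@localhost: Ed4SxpDyuH(y'

# The generated password is the last field of the line
temp_pw=$(printf '%s\n' "$line" | awk '{print $NF}')
echo "temporary password: $temp_pw"
```

On a real server the sample line would come from: grep 'temporary password' /var/log/mysqld.log | tail -1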

Change the root password. First, log in as root:

# mysql -u root -p

Enter the temporary password.
Next, in the MySQL shell:

mysql> ALTER USER 'root'@'localhost' IDENTIFIED BY 'my_new_root_strong_password';
mysql> EXIT;

Set MySQL to start automatically at boot time:

# systemctl enable mysqld

Start the MySQL shell again:

# mysql -u root -p

After entering the new root password, create a user for Mattermost and a new database:

mysql> CREATE USER 'mmuser'@'localhost' IDENTIFIED BY 'mmuser_strong_password';
mysql> CREATE DATABASE mattermostdb;
mysql> GRANT ALL PRIVILEGES ON mattermostdb.* TO 'mmuser'@'localhost';
mysql> FLUSH PRIVILEGES;
mysql> EXIT;

Install Mattermost Server

Download the latest release of Mattermost Server. At the time of writing, for example:

# wget https://releases.mattermost.com/3.6.2/mattermost-3.6.2-linux-amd64.tar.gz

Extract the archive, and move the ‘mattermost’ folder to /opt:

# tar xf *.gz
# mv mattermost /opt/

Create a directory for storage files:

# mkdir /opt/mattermost/data

Make sure that the drive is large enough to hold the anticipated number of uploaded files and images that will be stored in data.
Next, set up a user and group, both named ‘mattermost’, and set the ownership and permissions:

# useradd --system --user-group mattermost
# chown -R mattermost:mattermost /opt/mattermost
# chmod -R g+w /opt/mattermost

Set up the database driver through the /opt/mattermost/config/config.json file. In it, search for the "DriverName" and "DataSource" lines and change them as follows:

"DriverName": "mysql"
"DataSource": "mmuser:mmuser_strong_password@tcp(localhost:3306)/mattermostdb?charset=utf8"

Save, exit, and test the Mattermost Server with the following command:

# sudo -u mattermost /opt/mattermost/bin/platform

If everything works, it should output Server is listening on :8065. Interrupt it with CTRL+C.

Create a systemd unit

Create a systemd unit file for Mattermost, /etc/systemd/system/mattermost.service, and paste the following configuration into it:

[Unit]
Description=Mattermost
After=syslog.target network.target mysqld.service

[Service]
Type=simple
WorkingDirectory=/opt/mattermost/bin
User=mattermost
ExecStart=/opt/mattermost/bin/platform
PIDFile=/var/spool/mattermost/pid/master.pid
LimitNOFILE=49152

[Install]
WantedBy=multi-user.target

Set its permissions:

# chmod 664 /etc/systemd/system/mattermost.service

And reload the services:

# systemctl daemon-reload

Enable the Mattermost service:

# systemctl enable mattermost

And start it with systemd:

# systemctl start mattermost

Check if it’s running by visiting the URL http://localhost:8065.

Install and configure NGINX

Installation

In a production system, use a proxy server in front of Mattermost Server. In this case, NGINX.
The main benefits of doing this are:

  • SSL termination
  • Port mapping :80 to :8065
  • HTTP to HTTPS redirect
  • Standard request logs

In order to install NGINX on RHEL 7.1, create a yum repository file, /etc/yum.repos.d/nginx.repo, with the following content:

[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/rhel/7.1/$basearch/
gpgcheck=0
enabled=1

Save, exit and install NGINX with yum:

# yum install nginx.x86_64

Start NGINX and test it:

# systemctl start nginx

Configuration

In order to configure NGINX as a proxy server, create the file /etc/nginx/sites-available/mattermost and paste:

upstream backend {
   server localhost:8065;
}

proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=mattermost_cache:10m max_size=3g inactive=120m use_temp_path=off;

server {
   listen 80;
   server_name    mattermost.mydomain.com;

   location /api/v3/users/websocket {
       proxy_set_header Upgrade $http_upgrade;
       proxy_set_header Connection "upgrade";
       client_max_body_size 50M;
       proxy_set_header Host $http_host;
       proxy_set_header X-Real-IP $remote_addr;
       proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
       proxy_set_header X-Forwarded-Proto $scheme;
       proxy_set_header X-Frame-Options SAMEORIGIN;
       proxy_buffers 256 16k;
       proxy_buffer_size 16k;
       proxy_read_timeout 600s;
       proxy_pass http://backend;
   }

   location / {
       client_max_body_size 50M;
       proxy_set_header Connection "";
       proxy_set_header Host $http_host;
       proxy_set_header X-Real-IP $remote_addr;
       proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
       proxy_set_header X-Forwarded-Proto $scheme;
       proxy_set_header X-Frame-Options SAMEORIGIN;
       proxy_buffers 256 16k;
       proxy_buffer_size 16k;
       proxy_read_timeout 600s;
       proxy_cache mattermost_cache;
       proxy_cache_revalidate on;
       proxy_cache_min_uses 2;
       proxy_cache_use_stale timeout;
       proxy_cache_lock on;
       proxy_pass http://backend;
   }
}

Remove the existing default site file with:

# rm /etc/nginx/sites-enabled/default

(Note: the stock nginx.org package ships its default server block in /etc/nginx/conf.d/default.conf and does not create a sites-enabled directory; if that is your layout, remove or adjust that file instead, and make sure nginx.conf includes /etc/nginx/sites-enabled/*.)

and enable Mattermost:

# ln -s /etc/nginx/sites-available/mattermost /etc/nginx/sites-enabled/mattermost

Restart NGINX:

# systemctl restart nginx

Conclusions

At the end of this process, the server should be up and running. With a web browser, go to http://mattermost.mydomain.com and continue configuring Mattermost by entering an email address and creating an account.
That’s all! The server is ready to serve as your messaging system!

How to install Nextcloud on CentOS 7

NextCloud

Introduction

In a previous tutorial we talked about the installation of Nextcloud on an Ubuntu 16.04 server with Apache. Remember, Nextcloud is a cloud storage system. In this guide we’ll look at how to install and configure it on a CentOS 7 system, with Nginx as the web server, and MariaDB as the database.

Install Nginx and PHP7-FPM

First of all, add the EPEL repository, which contains Nginx:

# yum install epel-release

Next, install Nginx:

# yum install nginx

PHP7-FPM is available from an external repository; we’ll use the Webtatic one. To add it:

# rpm -Uvh https://mirror.webtatic.com/yum/el7/webtatic-release.rpm

Now, it’s possible to install PHP7-FPM and some Nextcloud dependencies:

# yum install php70w-fpm php70w-pecl-apcu-devel php70w-json php70w-pecl-apcu php70w-gd php70w-mcrypt php70w-mysql php70w-cli php70w-pear php70w-xml php70w-mbstring php70w-pdo

Check the PHP version to be sure that everything went well, with:

# php -v

Configure PHP-FPM

After installation, PHP must be configured for use with Nginx. With a text editor, open the /etc/php-fpm.d/www.conf file. In it, search for the lines containing the user and group strings and modify them as follows:

user = nginx
group = nginx

In the same file, look for the listen string, and modify it too:

listen = 127.0.0.1:9000

PHP will listen on port 9000.
Uncomment the following lines:

env[HOSTNAME] = $HOSTNAME
env[PATH] = /usr/local/bin:/usr/bin:/bin
env[TMP] = /tmp
env[TMPDIR] = /tmp
env[TEMP] = /tmp

Save and exit.
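The same edits can be applied non-interactively with sed. A sketch that works on a scratch copy; the sample file contents are abbreviated and illustrative, so verify the result before running the same substitutions against the real /etc/php-fpm.d/www.conf:

```shell
# Scratch copy with the stock values (abbreviated sample of www.conf)
cat > /tmp/www.conf <<'EOF'
user = apache
group = apache
listen = /run/php-fpm/www.sock
;env[HOSTNAME] = $HOSTNAME
;env[TMP] = /tmp
EOF

# Switch the pool to the nginx user, a TCP listener, and uncomment env lines
sed -i \
    -e 's/^user = .*/user = nginx/' \
    -e 's/^group = .*/group = nginx/' \
    -e 's|^listen = .*|listen = 127.0.0.1:9000|' \
    -e 's/^;env\[/env[/' \
    /tmp/www.conf

cat /tmp/www.conf
```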

Create a new session directory in /var/lib and change its owner to the nginx user:

# mkdir -p /var/lib/php/session
# chown nginx:nginx -R /var/lib/php/session/

Start and enable both Nginx and PHP7-FPM:

# systemctl start php-fpm
# systemctl start nginx
# systemctl enable php-fpm
# systemctl enable nginx

Install MariaDB

As previously said, MariaDB will be the database system, so install it like this:

# yum install mariadb-server mariadb

Next, start and enable it:

# systemctl start mariadb
# systemctl enable mariadb

Then, configure the root account for MariaDB:

# mysql_secure_installation
Set root password? [Y/n]
New password: my_strong_root_password
Re-enter new password: my_strong_root_password

Remove anonymous users? [Y/n]
Disallow root login remotely? [Y/n]
Remove test database and access to it? [Y/n]
Reload privilege tables now? [Y/n]

Now it’s time to log in to MariaDB and configure it for use with Nextcloud:

# mysql -u root -p

In its shell:

mysql> CREATE DATABASE my_nextclouddb;
mysql> CREATE USER ncuser@localhost IDENTIFIED BY 'ncuser@';
mysql> GRANT ALL PRIVILEGES ON my_nextclouddb.* TO ncuser@localhost IDENTIFIED BY 'ncuser@';
mysql> FLUSH PRIVILEGES;
mysql> EXIT;

Generate a SSL certificate

To use Nextcloud over HTTPS, you’ll need an SSL certificate. Generate a self-signed one with OpenSSL. First, create a new directory for that file:

# mkdir -p /etc/nginx/cert/

and generate it:

# openssl req -new -x509 -days 365 -nodes -out /etc/nginx/cert/nextcloud.crt -keyout /etc/nginx/cert/nextcloud.key

N.B.: the /etc/nginx/cert/ directory will eventually contain all the SSL certificates your server requires.
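To confirm what was generated, the certificate’s subject and validity window can be inspected with openssl x509. The sketch below uses throwaway paths under /tmp and an illustrative CN, and passes -subj so the generation runs non-interactively (unlike the interactive command above):

```shell
# Generate a throwaway self-signed certificate without interactive prompts
openssl req -new -x509 -days 365 -nodes \
    -subj "/CN=storage.mydomain.com" \
    -out /tmp/nextcloud.crt -keyout /tmp/nextcloud.key 2>/dev/null

# Show the subject and the validity window of the result
openssl x509 -in /tmp/nextcloud.crt -noout -subject -dates
```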

Change permissions:

# chmod 700 /etc/nginx/cert
# chmod 600 /etc/nginx/cert/*

Install Nextcloud

Now it’s time to download and install Nextcloud. Download the archive with:

# wget https://download.nextcloud.com/server/releases/nextcloud-11.0.2.zip
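Before extracting, it’s good practice to compare the archive against the SHA-256 checksum Nextcloud publishes next to each release. A small sketch of such a check; the helper name is ours, and the demonstration runs on a scratch file with a known hash rather than the real archive (whose checksum is not reproduced here):

```shell
# Compare a file's SHA-256 against an expected value
verify_sha256() {
    file="$1"; expected="$2"
    actual=$(sha256sum "$file" | awk '{print $1}')
    [ "$actual" = "$expected" ] && echo "OK: $file" || echo "MISMATCH: $file"
}

# Demonstration on a scratch file whose hash is known
printf 'hello\n' > /tmp/sample.bin
verify_sha256 /tmp/sample.bin \
    5891b5b522d5df086d0ff0b110fbd9d21bb4fc7163af34d08286a2e846f6be03
```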

Extract it and move it to /usr/share/nginx/html/:

# unzip nextcloud-11.0.2.zip
# mv nextcloud/ /usr/share/nginx/html/

Create a new data directory for Nextcloud:

# mkdir -p /usr/share/nginx/html/nextcloud/data/

Change the owner of nextcloud to the nginx user:

# chown nginx:nginx -R /usr/share/nginx/html/nextcloud

Configure a Virtual Host for Nextcloud

Create a new Virtual Host configuration file, /etc/nginx/conf.d/nextcloud.conf, and paste in the following configuration:

upstream php-handler {
    server 127.0.0.1:9000;
    #server unix:/var/run/php5-fpm.sock;
}
 
server {
    listen 80;
    server_name storage.mydomain.com;
    # enforce https
    return 301 https://$server_name$request_uri;
}
 
server {
    listen 443 ssl;
    server_name storage.mydomain.com;
 
    ssl_certificate /etc/nginx/cert/nextcloud.crt;
    ssl_certificate_key /etc/nginx/cert/nextcloud.key;
 
    # Add headers to serve security related headers
    # Before enabling Strict-Transport-Security headers please read into this
    # topic first.
    add_header Strict-Transport-Security "max-age=15768000;
    includeSubDomains; preload;";
    add_header X-Content-Type-Options nosniff;
    add_header X-Frame-Options "SAMEORIGIN";
    add_header X-XSS-Protection "1; mode=block";
    add_header X-Robots-Tag none;
    add_header X-Download-Options noopen;
    add_header X-Permitted-Cross-Domain-Policies none;
 
    # Path to the root of your installation
    root /usr/share/nginx/html/nextcloud/;
 
    location = /robots.txt {
        allow all;
        log_not_found off;
        access_log off;
    }
 
    # The following 2 rules are only needed for the user_webfinger app.
    # Uncomment it if you're planning to use this app.
    #rewrite ^/.well-known/host-meta /public.php?service=host-meta last;
    #rewrite ^/.well-known/host-meta.json /public.php?service=host-meta-json
    # last;
 
    location = /.well-known/carddav {
      return 301 $scheme://$host/remote.php/dav;
    }
    location = /.well-known/caldav {
      return 301 $scheme://$host/remote.php/dav;
    }
 
    # set max upload size
    client_max_body_size 512M;
    fastcgi_buffers 64 4K;
 
    # Disable gzip to avoid the removal of the ETag header
    gzip off;
 
    # Uncomment if your server is build with the ngx_pagespeed module
    # This module is currently not supported.
    #pagespeed off;
 
    error_page 403 /core/templates/403.php;
    error_page 404 /core/templates/404.php;
 
    location / {
        rewrite ^ /index.php$uri;
    }
 
    location ~ ^/(?:build|tests|config|lib|3rdparty|templates|data)/ {
        deny all;
    }
    location ~ ^/(?:\.|autotest|occ|issue|indie|db_|console) {
        deny all;
    }
 
    location ~ ^/(?:index|remote|public|cron|core/ajax/update|status|ocs/v[12]|updater/.+|ocs-provider/.+|core/templates/40[34])\.php(?:$|/) {
        include fastcgi_params;
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
        fastcgi_param HTTPS on;
        #Avoid sending the security headers twice
        fastcgi_param modHeadersAvailable true;
        fastcgi_param front_controller_active true;
        fastcgi_pass php-handler;
        fastcgi_intercept_errors on;
        fastcgi_request_buffering off;
    }
 
    location ~ ^/(?:updater|ocs-provider)(?:$|/) {
        try_files $uri/ =404;
        index index.php;
    }
 
    # Adding the cache control header for js and css files
    # Make sure it is BELOW the PHP block
    location ~* \.(?:css|js)$ {
        try_files $uri /index.php$uri$is_args$args;
        add_header Cache-Control "public, max-age=7200";
        # Add headers to serve security related headers (It is intended to
        # have those duplicated to the ones above)
        # Before enabling Strict-Transport-Security headers please read into
        # this topic first.
        add_header Strict-Transport-Security "max-age=15768000;
        includeSubDomains; preload;";
        add_header X-Content-Type-Options nosniff;
        add_header X-Frame-Options "SAMEORIGIN";
        add_header X-XSS-Protection "1; mode=block";
        add_header X-Robots-Tag none;
        add_header X-Download-Options noopen;
        add_header X-Permitted-Cross-Domain-Policies none;
        # Optional: Don't log access to assets
        access_log off;
    }
 
    location ~* \.(?:svg|gif|png|html|ttf|woff|ico|jpg|jpeg)$ {
        try_files $uri /index.php$uri$is_args$args;
        # Optional: Don't log access to other assets
        access_log off;
    }
}

Save, exit and test Nginx with:

# nginx -t

Then, restart it:

# systemctl restart nginx

Conclusions

The last thing to do is to complete the graphical installation wizard. With a web browser, go to storage.mydomain.com, create an admin account, and enter the information about the database created in the previous steps.
At the end, a complete Dropbox-like storage system will be available on the server!

What your inner computer geek should know about Python and Linux

Python and Linux

Having been around computers since my youngest years I have always been curious how they work, so I started playing around with them. I love computers… because they do what you tell them to do!

Being a computer geek is really a nice skill to have nowadays as almost every task we depend on in our daily lives involves a machine. Unless you are some kind of hippie living way off the grid, you’re using machines constantly.

To be honest with you, geeky computer skills do not just come to you overnight. There is always a lot of study and practice involved. No doubt, there will also be some frustration.

With the vast number of technologies currently available on the internet, it is really hard for a novice computer user to decide where to start and what to study. Not only this, but sometimes the documentation is boring or poorly written… there are a lot of reasons to give up!

Coming from a lot of struggle myself, I’ve decided to compile a few tips on two of the technologies every aspiring computer geek should start to learn: Python and Linux.

Linux is an operating system like the others

There is so much buzz on the internet about Linux being an advanced operating system only guys with a beard like Richard Stallman can use. Even though many believe this to be true, I’ve never had such a beard!

My skills are not bad when it comes to the terminal. I have managed to master the basics, such as connecting to a remote server via ssh, listing files inside a directory, copying data and even googling through the terminal. It is really a pretty cool program once you get the hang of it!

Like Windows or Mac OS X, Linux offers most of the tools one needs, such as a document reader, word processor, media player and even simple video editors. What makes Linux different is its open source philosophy: the code that runs the operating system is available to the public, giving geeks like us the opportunity to learn how stuff works under the hood.

The terminal myth

It is not true that one cannot work on Linux without command line skills. There is a graphical user interface which allows the user to interact with the operating system making it easy even for beginners to feel comfortable using Linux.

Unless you plan to run a Linux server, being a professional at the console is not required to make use of any of the distributions available out there.

Linux is free as in beer

Anyone can download a Linux distribution for free and install it on as many computers as they want. The folks behind Linux encourage anyone to share their downloaded copy with their friends or neighbors.

As we all know, sharing is caring guys!

Linux performs better than any other operating system I have used

It is far from perfect, but for the same resources Linux performs better than Windows 7 or Mac OS X. This is subjective, but it is not only my experience; many other computer folks share the same opinion.

Linux has a very warm community

The Linux community is very welcoming to the beginners offering them answers for any question they may have. The feeling of being stuck once you get one of the Linux distributions for the first time vanishes completely as soon as you post the problem you are experiencing in one of the forums where geeks like me love to spend their time.

Always ask questions, everyone in the Linux community will be happy to help!

Linux offers many useful command line utilities

There are plenty of utilities available for Linux distributions, such as file downloaders for the HTTP protocol, image viewers, calendar apps, calculators and even multimedia frameworks such as ffmpeg.

Thanks to the developers who contribute to open source Linux, geeks have plenty of tools to choose from to accomplish their daily computer tasks.

For example, one can use a special utility on their Linux machine to remove the EXIF data from an image file so their home GPS coordinates do not get published worldwide.

Linux is way more secure than Windows

Due to its wide distribution all over the world, the Windows operating system is a target for many computer pirates who want to get famous or make cash by creating and spreading viruses on the Internet.

There are Linux trojans out there, but since computer criminals have more experience creating Windows malware it is easier to rat a Windows 7 machine than the latest Ubuntu flavor.

A little bit about python

Enough about Linux. Another great technology, very useful when it comes to automating stuff, is the Python programming language, created by a mastermind named Guido van Rossum.

Python is open source

Like most of the great programming languages, Python’s source code is open to the public and hosted on many code platforms for anyone to download. A passionate soul is free to study the source code of this programming language in order to learn how it works, so they can produce high quality code when writing applications.

The Python community is the best

I had the chance to attend Europython 2013 and I can say it was truly a great experience. The Python guys are so warm they feel like family! Everyone is so nice to you and they are ready and willing to help you improve your Python skills.

Python is used by Google and YouTube

Both Google and YouTube make use of Python for developing their applications.

Python is perfect for scripting

Its high-level characteristics make Python the Swiss Army knife of programming when it comes to automation, for example scanning a network by interacting with the Nmap tool. There are many ways one can use Python to automate daily tasks, as there are many libraries available for the job.

Python syntax feels like human language

Every programming language defines its own syntax, which coders and developers need to learn in order to write something that works without producing errors. The syntax of Python feels like human language to me.

For example, the following piece of code is used to print a word in the console for the user to view:

print("unixmen.com")

It feels like communicating with a machine in plain English, doesn’t it?

Python is an interpreted language

You run Python code within an interpreter. The good thing about this is that you get the result of your code immediately, without waiting for a compile step as you would in the C programming language, for example.

Python is a high level language that is free as in beer

Compared to computer programming languages such as C, C++ or Java, Python is a very high-level language. Not only is it open source, it is also free to use, along with the Python tools offered by the python.org site. You don’t need to pay for a license or anything else in order to build tools in this lovely programming language.

Python is used in the security field

Security professionals make use of the Python programming language to write their tools and exploits. There are so many security related open source tools coded in Python that can be found on github.com. Some of these tools come shipped with Linux distributions.

Python can be used as a web development tool

One thing I noticed at the EuroPython conference was the presence of many web developers using Python and frameworks such as Django to build web applications.

Everything is moving to the web, so building web applications is a great skill to have right now, as the industry really needs talented developers.

Conclusion

Linux is a great technology which is used not only among geeks, but also in the banking industry, the military and big data centers. Learning Linux is fun, and it increases one’s chances of getting a job in the IT field. But what is a Linux user without knowledge of programming languages? A fish in an aquarium!

Install Mattermost with PostgreSQL on Ubuntu 16.04

Mattermost

Introduction

Mattermost is a workplace messaging system for web, PCs and phones. It’s an open source alternative to Slack.
A complete Mattermost installation consists of three components: the Mattermost server, a proxy server, and a database server. Each component can be installed on the same machine, or on three different machines on the local network.
This tutorial explains how to install Mattermost Team Edition on Ubuntu 16.04, with Nginx as the proxy server and PostgreSQL as the database.

Install PostgreSQL

On the server that will host the database, execute the following command:

# apt install postgresql postgresql-contrib

During the installation process a new system user, “postgres”, will be created. At the end, log in as that user with the command:

$ sudo --login --user postgres

Start the PostgreSQL shell:

$ psql

Create a new database:

postgres=# CREATE DATABASE mattermost_db;
postgres=# CREATE USER mmuser WITH PASSWORD 'my_strong_password';
postgres=# GRANT ALL PRIVILEGES ON DATABASE mattermost_db to mmuser;
postgres=# \q

Log out of this account, and allow PostgreSQL to listen on all assigned IPs by editing the file

/etc/postgresql/9.5/main/postgresql.conf

There, find line

#listen_addresses = 'localhost'

uncomment it and change ‘localhost’ to ‘*’, keeping the single quotes; so, after editing:

listen_addresses = '*'
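The same edit can be scripted with sed; a sketch, demonstrated on a scratch copy rather than the real postgresql.conf (note that the value keeps its single quotes in the actual config):

```shell
# Demonstrated on a temporary file; point CONF at your real
# postgresql.conf on a live system.
CONF=$(mktemp)
echo "#listen_addresses = 'localhost'" > "$CONF"

# Uncomment the directive and listen on all addresses (keep the quotes).
sed -i "s/^#listen_addresses = 'localhost'/listen_addresses = '*'/" "$CONF"

grep listen_addresses "$CONF"
# listen_addresses = '*'
```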

Reload the database with:

# systemctl reload postgresql

Now that the database is installed, it’s time to install the Mattermost Server.

Install Mattermost Server

Download the latest version of the Mattermost Server with wget. At the time of writing, this is 3.6.2:

# wget https://releases.mattermost.com/3.6.2/mattermost-3.6.2-linux-amd64.tar.gz

Extract the archive, and move the resulting folder to its final location; in this example Mattermost will be stored in /opt:

# tar xvzf mattermost-3.6.2-linux-amd64.tar.gz
# mv mattermost /opt

Create a storage directory, named “data”, in this folder:

# mkdir /opt/mattermost/data

The storage directory will contain all the files and images that users post to Mattermost, so it’s necessary that the drive that contains it is large enough.

Next, create a new system user and group, both named mattermost:

# useradd --system --user-group mattermost

Set the mattermost user and group as the owner of the Mattermost files:

# chown -R mattermost:mattermost /opt/mattermost

Give write permissions to the mattermost group:

# chmod -R g+w /opt/mattermost

Next, set up the database driver:

# $EDITOR /opt/mattermost/config/config.json

In that file, in the “SqlSettings” section, change “DriverName” and “DataSource” lines as follows:

"DriverName": "postgres"
"DataSource": "postgres://mmuser:my_strong_password@127.0.0.1:5432/mattermost_db?sslmode=disable&connect_timeout=10",
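The DataSource value is a standard PostgreSQL connection URI of the form postgres://USER:PASSWORD@HOST:PORT/DBNAME?options. A small sketch assembling it from shell variables (the variable names are just for illustration), handy when the credentials are defined elsewhere:

```shell
# Hypothetical helper: build the Mattermost DataSource URI from its parts.
DB_USER=mmuser
DB_PASS=my_strong_password
DB_HOST=127.0.0.1
DB_PORT=5432
DB_NAME=mattermost_db

echo "postgres://${DB_USER}:${DB_PASS}@${DB_HOST}:${DB_PORT}/${DB_NAME}?sslmode=disable&connect_timeout=10"
# postgres://mmuser:my_strong_password@127.0.0.1:5432/mattermost_db?sslmode=disable&connect_timeout=10
```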

Save and exit. Test the configuration, running the server as the “mattermost” user:

$ sudo -u mattermost /opt/mattermost/bin/platform

You will see that Mattermost is running on 127.0.0.1:8065. Stop it with CTRL+C.

Next, create a new mattermost service file for systemd:

# $EDITOR /etc/systemd/system/mattermost.service

And paste into this file the following text:

[Unit]
Description=Mattermost, an open source alternative to Slack
After=network.target
After=postgresql.service
Requires=postgresql.service

[Service]
Type=simple
ExecStart=/opt/mattermost/bin/platform
Restart=always
RestartSec=10
WorkingDirectory=/opt/mattermost
User=mattermost
Group=mattermost

[Install]
WantedBy=multi-user.target

Save and exit. Then:

# systemctl daemon-reload

Now, start the service and enable it at boot:

# systemctl start mattermost
# systemctl enable mattermost

Install Nginx

Install Nginx and configure it as a reverse proxy for Mattermost. First:

# apt install nginx

Next:

# mkdir /etc/nginx/ssl
# cd /etc/nginx/ssl

Generate a new self-signed SSL certificate file:

# openssl req -new -x509 -days 365 -nodes -out /etc/nginx/ssl/mattermost.crt -keyout /etc/nginx/ssl/mattermost.key
# chmod 400 mattermost.key

This creates a new self-signed certificate, valid for one year, and restricts the permissions of its private key.
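If you prefer a fully non-interactive run, the certificate subject can be passed on the command line; a sketch (the CN below is an example value, use your own domain), with a quick inspection of the result:

```shell
# Non-interactive variant: -subj skips the interactive questions,
# -newkey rsa:2048 makes the key size explicit (CN is an example value).
openssl req -new -x509 -days 365 -nodes -newkey rsa:2048 \
    -subj "/CN=mattermost.example.com" \
    -out mattermost.crt -keyout mattermost.key
chmod 400 mattermost.key

# Inspect the certificate subject:
openssl x509 -in mattermost.crt -noout -subject
```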
Next, create a new configuration file for mattermost in

/etc/nginx/sites-available

:

# $EDITOR /etc/nginx/sites-available/mattermost

and paste in it:

server {
   listen         80;
   server_name    mattermost.example.com;
   return         301 https://$server_name$request_uri;
}
 
server {
   listen 443 ssl;
   server_name mattermost.example.com;
 
   ssl_certificate /etc/nginx/ssl/mattermost.crt;
   ssl_certificate_key /etc/nginx/ssl/mattermost.key;
   ssl_session_timeout 5m;
   ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
   ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
   ssl_prefer_server_ciphers on;
   ssl_session_cache shared:SSL:10m;
 
   location / {
      gzip off;
      proxy_set_header X-Forwarded-Ssl on;
      client_max_body_size 50M;
      proxy_set_header Upgrade $http_upgrade;
      proxy_set_header Connection "upgrade";
      proxy_set_header Host $http_host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Proto $scheme;
      proxy_set_header X-Frame-Options SAMEORIGIN;
      proxy_pass http://127.0.0.1:8065;
   }
}

Make sure to customize the server_name value with the FQDN of your Mattermost server.

Save, exit and activate the virtual host:

# ln -s /etc/nginx/sites-available/mattermost /etc/nginx/sites-enabled/

Test the configuration with:

# nginx -t

and then restart it, like this:

# systemctl restart nginx

Testing and conclusion

The last thing to do is the “graphical” configuration. With a web browser, go to https://mattermost.example.com.
There, create a new account. A dashboard will appear, in which it’s possible to create new teams and/or access the admin/system console.
That’s all that is required to install the Mattermost messaging infrastructure.

How to install ownCloud 9.1.4 on CentOS 7

ownCloud 9.1.4

Introduction

OwnCloud 9.1.4 is open source software for file sharing and data synchronization, very useful in the enterprise sector, with an easy-to-use web front-end.

This tutorial is about installing ownCloud on CentOS 7 with Nginx as your web server.

Install Nginx and PHP

First, install Nginx. This web server is available in the EPEL repository, so add that repository first:

# yum install epel-release

and then:

# yum install nginx

Next, install PHP-FPM (FastCGI Process Manager) from the Webtatic repository, which is added with the following command:

# rpm -Uvh https://mirror.webtatic.com/yum/el7/webtatic-release.rpm

Now it is possible to install PHP with other packages required by ownCloud:

# yum install php70w-fpm php70w-cli php70w-json  php70w-mcrypt  php70w-pear php70w-mysql php70w-xml php70w-gd php70w-mbstring php70w-pdo

Configure PHP-FPM for Nginx

PHP-FPM configuration is done by editing the php7-fpm configuration file:

# $EDITOR /etc/php-fpm.d/www.conf

Search for the lines containing “user” and “group”, and change them to:

user = nginx
group = nginx

Scroll down to the “listen” line, and change it to:

listen = 127.0.0.1:9000

Next, uncomment the following lines about environment variables:

env[HOSTNAME] = $HOSTNAME
env[PATH] = /usr/local/bin:/usr/bin:/bin
env[TMP] = /tmp
env[TMPDIR] = /tmp
env[TEMP] = /tmp

Save and exit.
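As an alternative to hand-editing, the uncommenting can be scripted; a hedged sketch with sed, shown on a scratch copy (php-fpm comments its directives with a leading “;”):

```shell
# Demonstrated on a temporary file; point FPM_CONF at
# /etc/php-fpm.d/www.conf on a real system.
FPM_CONF=$(mktemp)
printf '%s\n' ';env[HOSTNAME] = $HOSTNAME' ';env[TMP] = /tmp' > "$FPM_CONF"

# Strip the leading ';' from the env[...] lines only.
sed -i 's/^;\(env\[\)/\1/' "$FPM_CONF"

grep '^env\[' "$FPM_CONF"
```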
Now, it’s time to create a new session folder in /var/lib/, with the following command:

# mkdir -p /var/lib/php/session

Change its owner to the nginx user:

# chown nginx:nginx -R /var/lib/php/session/

Start nginx and PHP-FPM:

# systemctl start php-fpm
# systemctl start nginx

Enable both services at boot time (required when the machine is used as a server):

# systemctl enable nginx
# systemctl enable php-fpm

Install MariaDB

MariaDB is available in the CentOS repository, so install it with:

# yum install mariadb mariadb-server

Configure the MariaDB root password:

# mysql_secure_installation

During the process, answer the following questions:

Set root password? [Y/n]
New password:
Re-enter new password:

Remove anonymous users? [Y/n]
Disallow root login remotely? [Y/n]
Remove test database and access to it? [Y/n]
Reload privilege tables now? [Y/n]

Log in to the MariaDB shell to create a new database and user for ownCloud. In this example, my_owncloud_db is the database name, ocuser is its user, and my_strong_password is the password.
So, execute the command:

# mysql -u root -p

and then:

mysql> CREATE DATABASE my_owncloud_db;
mysql> CREATE USER ocuser@localhost IDENTIFIED BY 'my_strong_password';
mysql> GRANT ALL PRIVILEGES ON my_owncloud_db.* to ocuser@localhost IDENTIFIED BY 'my_strong_password';
mysql> FLUSH PRIVILEGES;

Generate a SSL Certificate

If none exists, create a new directory for the SSL file:

# mkdir -p /etc/nginx/cert/

Next, generate a new SSL certificate file:

# openssl req -new -x509 -days 365 -nodes -out /etc/nginx/cert/owncloud.crt -keyout /etc/nginx/cert/owncloud.key

Change the permissions with the following command:

# chmod 600 /etc/nginx/cert/*
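To double-check that nothing in the certificate directory was left readable by other users, a small sketch (GNU find/stat, demonstrated on scratch files rather than /etc/nginx/cert):

```shell
# Create scratch files standing in for the certificate and key.
CERT_DIR=$(mktemp -d)
touch "$CERT_DIR/owncloud.crt" "$CERT_DIR/owncloud.key"
chmod 600 "$CERT_DIR"/*

# List any file that is NOT exactly mode 600; no output means all is well.
find "$CERT_DIR" -type f ! -perm 600
```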

Download ownCloud

Download ownCloud Server:

# wget https://download.owncloud.org/community/owncloud-9.1.4.zip

Extract the archive and move it to /usr/share/nginx/html/:

# unzip owncloud-9.1.4.zip
# mv owncloud/ /usr/share/nginx/html/

Go to the Nginx root directory and create a new data directory for ownCloud:

# cd /usr/share/nginx/html/
# mkdir -p owncloud/data/

Configure a Virtual Host in Nginx

Create a Virtual Host configuration file with the following command:

# $EDITOR /etc/nginx/conf.d/owncloud.conf

Paste the following text into the file:

 upstream php-handler {
    server 127.0.0.1:9000;
    #server unix:/var/run/php5-fpm.sock;
}
 
server {
    listen 80;
    server_name storage.example.com;
    # enforce https
    return 301 https://$server_name$request_uri;
}
 
server {
    listen 443 ssl;
    server_name storage.example.com;
 
    ssl_certificate /etc/nginx/cert/owncloud.crt;
    ssl_certificate_key /etc/nginx/cert/owncloud.key;
 
    # Add headers to serve security related headers
    # Before enabling Strict-Transport-Security headers please read into this topic first.
    add_header Strict-Transport-Security "max-age=15552000; includeSubDomains";
    add_header X-Content-Type-Options nosniff;
    add_header X-Frame-Options "SAMEORIGIN";
    add_header X-XSS-Protection "1; mode=block";
    add_header X-Robots-Tag none;
    add_header X-Download-Options noopen;
    add_header X-Permitted-Cross-Domain-Policies none;
 
    # Path to the root of your installation
    root /usr/share/nginx/html/owncloud/;
 
    location = /robots.txt {
        allow all;
        log_not_found off;
        access_log off;
    }
 
    # The following 2 rules are only needed for the user_webfinger app.
    # Uncomment it if you're planning to use this app.
    #rewrite ^/.well-known/host-meta /public.php?service=host-meta last;
    #rewrite ^/.well-known/host-meta.json /public.php?service=host-meta-json last;
 
    location = /.well-known/carddav {
        return 301 $scheme://$host/remote.php/dav;
    }
    location = /.well-known/caldav {
        return 301 $scheme://$host/remote.php/dav;
    }
 
    location /.well-known/acme-challenge { }
 
    # set max upload size
    client_max_body_size 512M;
    fastcgi_buffers 64 4K;
 
    # Disable gzip to avoid the removal of the ETag header
    gzip off;
 
    # Uncomment if your server is build with the ngx_pagespeed module
    # This module is currently not supported.
    #pagespeed off;
 
    error_page 403 /core/templates/403.php;
    error_page 404 /core/templates/404.php;
 
    location / {
        rewrite ^ /index.php$uri;
    }
 
    location ~ ^/(?:build|tests|config|lib|3rdparty|templates|data)/ {
        return 404;
    }
    location ~ ^/(?:\.|autotest|occ|issue|indie|db_|console) {
        return 404;
    }
 
    location ~ ^/(?:index|remote|public|cron|core/ajax/update|status|ocs/v[12]|updater/.+|ocs-provider/.+|core/templates/40[34])\.php(?:$|/) {
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
        fastcgi_param HTTPS on;
        fastcgi_param modHeadersAvailable true; #Avoid sending the security headers twice
        fastcgi_param front_controller_active true;
        fastcgi_pass php-handler;
        fastcgi_intercept_errors on;
        fastcgi_request_buffering off;
    }
 
    location ~ ^/(?:updater|ocs-provider)(?:$|/) {
        try_files $uri $uri/ =404;
        index index.php;
    }
 
    # Adding the cache control header for js and css files
    # Make sure it is BELOW the PHP block
    location ~* \.(?:css|js)$ {
        try_files $uri /index.php$uri$is_args$args;
        add_header Cache-Control "public, max-age=7200";
        # Add headers to serve security related headers (It is intended to have those duplicated to the ones above)
        # Before enabling Strict-Transport-Security headers please read into this topic first.
        #add_header Strict-Transport-Security "max-age=15552000; includeSubDomains";
        add_header X-Content-Type-Options nosniff;
        add_header X-Frame-Options "SAMEORIGIN";
        add_header X-XSS-Protection "1; mode=block";
        add_header X-Robots-Tag none;
        add_header X-Download-Options noopen;
        add_header X-Permitted-Cross-Domain-Policies none;
        # Optional: Don't log access to assets
        access_log off;
    }
 
    location ~* \.(?:svg|gif|png|html|ttf|woff|ico|jpg|jpeg)$ {
        try_files $uri /index.php$uri$is_args$args;
        # Optional: Don't log access to other assets
        access_log off;
    }
}

Save and exit. Next, test Nginx:

# nginx -t

This should display a “Syntax OK” message.

Restart Nginx:

# systemctl restart nginx

Conclusion

The server-side configuration is complete. The last thing to do is to go to your ownCloud server URL (storage.example.com in this example) with a web browser and finish the configuration in the graphical front-end: create a new admin account, and enter the database credentials created in the previous steps. Your cloud storage service is now ready for daily use!

How to install and use EncryptPad on Ubuntu 16.04

EncryptPad

Introduction

EncryptPad is an application for editing symmetrically encrypted text. It also provides a tool for encrypting and decrypting binary files, using the OpenPGP format (RFC 4880).
Some of its features include:

  • Symmetric encryption;
  • Passphrase and key file protections;
  • Encryption of binary files;
  • File format compatible with OpenPGP;
  • Cipher and hash algorithms;
  • Integrity protection with SHA-1

Installation

EncryptPad is available on an external repository; to add it, execute the following command:

# apt-add-repository ppa:nilarimogard/webupd8

Update the apt repositories database:

# apt update

And next, install EncryptPad:

# apt install encryptpad encryptcli

Confirm the download and installation; after that, the application will appear in the system’s main menu.

Testing

Once launched, EncryptPad appears as a simple text editor, with a menu bar on top for performing operations.

Now, let’s take a look at a few different ways of working with EncryptPad!

Encrypt a plain text file with a key file

The first thing to do is to generate a key and a passphrase, used for encryption and decryption tasks. To do this, click on Encryption > Generate Key, and a second window will appear.


Enter the path in which you’d like to store the key file and a name for it. For example, /home/unixmen/mykey.key.

The next step is to enter a passphrase for the key file. EncryptPad will ask if it can use the new key for the current file. Click on Yes. At the end you should see this:


Write something, and click Save.


Change the format from .epd to the GnuPG format (.gpg), and save it.
EncryptPad encrypts the file and saves it on disk. It’s easy to verify that everything went well: just try opening the file with another text editor; the content won’t be readable.

Encrypt a plain text file with a passphrase

The previous section was about encrypting with a key file. In this one, we’ll try encryption based on a passphrase. Open a new file, enter some content, and then click Save. Choose the .gpg format, and save it.


EncryptPad (which up to now has worked as a normal text editor) will ask for a passphrase. Enter one, and go on. At the end:


which means that EncryptPad is encrypting the file with the newly entered passphrase.

Open a terminal, go to the folder containing this new file, and execute the following command:

$ gpg --list-packets prova.gpg

GPG will ask for the passphrase (the one entered during the saving process), and then display output like the following:

:symkey enc packet: version 4, cipher 9, s2k 3, hash 8
	salt 4a2c5287bcdce48d, count 1015808 (159)
gpg: AES256 encrypted data
:encrypted data packet: 
	length: 92
	mdc_method: 2
gpg: encrypted with 1 passphrase
:compressed packet: algo=2
:literal data packet:
	mode t (74), created 1743235928, name="_CONSOLE",
	raw data: unknown length
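The same passphrase-based encryption can be reproduced end to end with plain gpg; a sketch (file names and passphrase are example values; --pinentry-mode loopback is needed on GnuPG 2.1+ to supply a passphrase non-interactively):

```shell
# Encrypt and decrypt a file symmetrically, as EncryptPad does for
# passphrase-only .gpg files (names and passphrase are example values).
echo "hello from unixmen" > note.txt

gpg --batch --yes --pinentry-mode loopback --passphrase "my_passphrase" \
    --symmetric --cipher-algo AES256 note.txt      # writes note.txt.gpg

gpg --batch --yes --pinentry-mode loopback --passphrase "my_passphrase" \
    -o decrypted.txt --decrypt note.txt.gpg

cmp -s note.txt decrypted.txt && echo "round trip OK"
```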

Encrypt a plain text file both with passphrase and key file

Open a new text file; at this point, EncryptPad is in unprotected plain-text mode.

Click the Generate key button and create a new key. Select the “Key In Repository” option and enter a name (just the name, with no path and no extension).

EncryptPad will ask for a passphrase to protect the key file on disk. The new file containing the key can be found in the $HOME/.encryptpad folder.

Save the text file in the .epd format. The program will ask for the passphrase created earlier. After entering it, the new file will be protected by both methods.


It may be cumbersome to enter the path to the key file every time, especially if it is not in the repository. To make this easier, you can enable Persist key location in encrypted file (this feature is only supported by the EPD file type).

Click the Set key button:


Enable the persistent key location, and click Ok.

From now on, you will not be asked for the key file location, as it is included in the encrypted file itself. It is also hidden from unwanted viewers, because the location is encrypted with your passphrase.

Conclusion

With this simple yet powerful editor, it’s easy to work with encrypted files. With the same options, it’s also possible to encrypt/decrypt binary files, like images or archives. So, why not play around with it!