
How to install Django on Ubuntu (the right way) for novice users


About Django

Django is a great web application development framework, "the web framework for perfectionists with deadlines", built in the Python programming language. It provides a very good structure and easy methods that do the heavy lifting when writing web applications.
If you have never used Django before and are looking for an installation guide on your Ubuntu machine then you are in the right place. In this article we will teach you how to set up a very simple Django development environment, step by step.

Some Tools We Need

There are many different ways to install the Django framework on Ubuntu. In order to stay in line with the latest Python development industry standards, we need to install a few tools before getting Django up and running on our machine.
The first tool is called pip which is a very practical python package management system widely used by python developers all over the world. I mainly use pip to install packages for my python projects, generate dependencies of a project and also uninstall different python packages from my machine when they are no longer required.
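Once pip is installed (see below), those three everyday tasks map to three short commands; the package name requests here is just an illustration:

pip install requests
pip freeze > requirements.txt
pip uninstall requests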

Before installing pip we highly recommend updating the package index on your machine with the help of the command shown below.

sudo apt-get update

The following command can be used to install the pip python package manager inside Ubuntu:

 sudo apt-get install python-pip

Now, you can easily install python packages on your Ubuntu machine by following the syntax shown here:

 pip install name-of-python-package-here

But we do not recommend installing packages yet! Python developers use another tool during the development of their python projects…

This tool is called virtualenv. It is really useful as it helps to manage dependency conflicts between different python projects on the same machine. Essentially, this tool helps to create isolated virtual environments where you can develop without any worries about package conflicts.

To install virtualenv on Ubuntu the following command can be used:

sudo pip install virtualenv

Create A Virtual Environment For Your Project

Virtualenv can be easily used to create isolated virtual environments for your python projects.
For example, to create a virtual environment under the name of venv1 the following command can be used:

virtualenv venv1

In order to work on an isolated environment it needs to be activated. The following command will make that happen:

 source venv1/bin/activate

But working with virtualenv is a bit annoying as there are many different commands you need to remember and type on your terminal emulator.
The best solution is to install and use virtualenvwrapper which makes working with python virtual environments very easy. Once you have finished installing and configuring virtualenvwrapper working on a virtual environment is as easy as typing the following command:

workon venv1

Now that pip and virtualenv tools are available on your machine it is time to install and configure virtualenvwrapper. To install virtualenvwrapper use pip, as shown below:

pip install virtualenvwrapper

There are a few steps you need to follow in order to do this properly. On your command prompt run the following command:

source /usr/local/bin/virtualenvwrapper.sh

All virtual environments you create will be available inside the directory ~/.virtualenvs.

If you want to keep your virtual environments inside another directory then use the following commands (as shown in the official documentation of the virtualenvwrapper):

export WORKON_HOME=~/Name-Of-Directory-Here
mkdir -p $WORKON_HOME
source /usr/local/bin/virtualenvwrapper.sh

I like to keep my virtual environments inside ~/.virtualenvs. My projects are inside ~/projects.
To create a virtual environment just use the command mkvirtualenv as shown below:

 mkvirtualenv linuxtuts

The following output is displayed on my terminal when executing the above command.

 New python executable in linuxtuts/bin/python

 Installing setuptools, pip...done.

To work on the virtual environment linuxtuts use the following command:

workon linuxtuts

Once the virtual environment has been activated your command prompt will change. Mine looks like this:

 (linuxtuts)oltjano@baby:~/Desktop$

As you can see from the output shown above, linuxtuts is the name of the virtual environment we created.

Make the changes permanent

In order for the commands offered by virtualenvwrapper to work every time you open the terminal you will need to make some changes in the .bashrc file.

The .bashrc file is used to set up environment variables, function aliases and lots of other information you will want to have when opening a new terminal window. Read more on .bashrc.

Open the .bashrc file with your text editor, and then copy/paste the following into it and save the file.

export WORKON_HOME=$HOME/.virtualenvs
source /usr/local/bin/virtualenvwrapper.sh
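New terminals will pick these settings up automatically; to load them into a terminal that is already open, reload the file:

source ~/.bashrc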

Install Django

Inside the virtual environment use the command ‘which python’ to see the python executable you are using.

which python

The above command produces the following output on my machine. Depending on the name of your directory for the virtual environments you will get a different output, but structured in a very similar way.

 /home/oltjano/.virtualenvs/linuxtuts/bin/python

The above output tells us that the python executable being used is inside the virtual environment, which in this case is linuxtuts.

Deactivate the virtual environment with the command deactivate.

deactivate

Remove the virtual environment linuxtuts with the help of the following command:

 rmvirtualenv linuxtuts

The reason we decided to remove linuxtuts is that we did not choose the version of Python we wanted to use inside the virtual environment we created.
It can be done with the following command:

 mkvirtualenv -p /usr/bin/python-version-here name-of-environment-here

In a project I am working on we use python3.2. To run the project I create a special environment for it:

 mkvirtualenv -p /usr/bin/python3.2 linuxtuts

The above command produces the following output:

Running virtualenv with interpreter /usr/bin/python3.2
New python executable in linuxtuts/bin/python3.2
Also creating executable in linuxtuts/bin/python
Installing setuptools, pip...done.

The command prompt will look similar to mine.

(linuxtuts)oltjano@baby:~/Desktop$

You can use any python version required by your project. The installation of Django is very easy. Just run the following command:

pip install django

The above command produces the following output:

 (linuxtuts)oltjano@baby:~/Desktop$ pip install django

You are using pip version 6.0.7, however version 6.0.8 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.

Collecting django
 Downloading Django-1.7.6-py2.py3-none-any.whl (7.4MB)
 100% |################################| 7.4MB 40kB/s
Installing collected packages: django
Successfully installed django-1.7.6

To install a different version of Django, specify the version you would like to use, as shown here:

 pip install Django==version-number-here

It is up to you which version of python and Django you want to use for your projects.

Set up your first simple project in Django

After you have finished the installation of Django in a virtual environment, you probably want to start your first project.

Fortunately for us, Django offers management tools for your projects. The django-admin.py tool can be used to start the project.

Create a fresh virtual environment for your new project.

mkvirtualenv -p /usr/bin/python2.7 fresh_project

Note: depending on the version of Python you want to use, you need to change the path listed above.

Install a new copy of Django inside your virtual environment.

pip install django

Use the following command to start a fresh Django project.

 django-admin.py startproject fresh_project

Then cd to your project directory and list the files inside your project. The directory structure will be similar to the one shown below.

fresh_project manage.py

manage.py is another tool you can use to work on your Django projects. Each Django project is composed of apps.

You can create a new app with the following command:

python manage.py startapp myapp

If you do an ls now, the directory structure should look like the one shown below:

fresh_project manage.py myapp

It is always a good idea to generate a requirements.txt file for every project you write in Django, so that when you show it to your friends or want to run it on another machine, you can easily install the packages needed to run it.

Create a new file called requirements.txt in the top-level directory of your project and list the packages there using the following syntax:

django==1.9.5

As you can see from the line above, you write the package name and then pin its version. This is very important.
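If you would rather not maintain the file by hand, pip can generate it from whatever is currently installed in the active virtual environment:

pip freeze > requirements.txt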

If you do an ls now, inside your project you should get the following:

fresh_project manage.py myapp requirements.txt

Install the requirements for the project on a new machine by using the following command:

 pip install -r requirements.txt

Are you asking yourself how to run the Django project? Just run it using the following command, which starts the built-in Django development server:

python manage.py runserver

And then visit http://127.0.0.1:8000/. You will get a message from Django congratulating you on running your first Django project!

Conclusion

Knowledge of tools such as virtualenv and virtualenvwrapper is a must for a Python developer. In this article I showed you how to install Django on Ubuntu and how to set up a very basic Django development environment.

I will publish another tutorial on Django in which I will explain in detail how to set up a professional Django development environment in Linux.

How to install and configure Piwik on Ubuntu Server 16.04


Web analytics – a general overview

Those who own a website probably know what “web analytics” refers to. It’s a general term indicating the study of the impact of a website on its users. A business based on Internet services, like e-commerce, needs to find statistical information about people who visit the site, and this data is provided by web analytics software.
According to W3Techs, the most used is Google Analytics. But, if you don’t want to use a remote-hosted service, you may want to give Piwik a chance.
Piwik is an open source project with advanced privacy features that runs on your own server.
Interested? This tutorial will explain how to set up Piwik on an Ubuntu Server 16.04 machine:

Prerequisites

If your server is up and running, you probably already have the LAMP stack, but, if you don’t, just follow our tutorial on how to install it on Ubuntu 16.04.
Piwik has the following requirements:

  • A web server such as Apache, Nginx, IIS, or others;
  • PHP 5.5.9 (PHP 7 recommended);
  • MySQL >5.7, or MariaDB;
  • PHP extensions (see the example install command after this list):
    php5-curl php5-gd php5-cli php5-geoip php5-mysql
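On Ubuntu 16.04 the stock PHP is version 7, so the extension packages carry a php- prefix rather than php5-. A hedged one-liner (exact package names can vary between releases) would be:

sudo apt-get install php-curl php-gd php-cli php-geoip php-mysql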

Getting started

If your system has all the required components, enter the following commands to download the latest version of Piwik:

cd /var/www/html/
wget http://builds.piwik.org/latest.zip
unzip latest.zip

Assign permissions

In /var/www/html/ you should now have a folder named piwik. Change its permissions with the following commands:

chown -R www-data:www-data /var/www/html/piwik
chmod -R 0755 /var/www/html/piwik/tmp

Configure MySQL

You will need to specify a MySQL user with permission to create and edit tables in the database.
For creating a new one, enter the following commands on your shell:

$ mysql -u root -p
mysql> CREATE DATABASE piwik_database;
mysql> CREATE USER 'piwik'@'localhost' IDENTIFIED BY 'password_here';
mysql> GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES ON piwik_database.* TO 'piwik'@'localhost';
mysql> exit

Note: piwik_database is the name of a database which should only contain Piwik tables.

Now, you can begin to configure Piwik. In your browser, go to http://localhost/piwik and you'll see the Piwik welcome page.
Click Next, and Piwik will check your system for the required components.
In this case, you can see that my system needed the PHP extension mbstring. After enabling it, reloading the page clears the warning, and then it is possible to go on.
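On Ubuntu 16.04, a likely fix for that particular warning (assuming the stock PHP 7 packaging) is:

sudo apt-get install php-mbstring
sudo service apache2 restart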
Next, Piwik will ask you to set up the database. Enter your username and password, and click Next. Piwik will create the required tables and check everything.
The next page is really important: creating the super user.
The super user is the user that you create when you install Piwik, and it has the highest permissions. Enter the required information, and be sure to keep your super user login data safe, because it's the only way you can access Piwik for the first time.
The super user can perform administrative tasks including adding new websites to monitor, adding users, changing user permissions, and enabling and disabling plugins.

When you finish, click Next.

Set up a website

Enter the website URL, name, and time zone. This will be the first site Piwik tracks. Of course, you can add more once the installation is complete.

The last thing to do is set up the JavaScript tracking tag. A tracking code is required to record visitors, visits, and page views in Piwik. If you use a CMS, like WordPress, you should use an add-on, extension, or plugin to make sure this tag appears where it needs to.

Click on Next, and that’s all! You have completed your Piwik installation!

First access

Log in to the dashboard using the super user credentials entered during the installation process. From the dashboard you can see all tracking data, which is updated in real time. It's also easy to add new sites, or new users, if you need to.

Medium and High-Traffic Websites

If you own (or manage) a website with many visitors (more than a few hundred per day), the Piwik team recommends setting up an auto-archiving cron task so that Piwik pre-calculates periodic reports. This is because, by default, Piwik recalculates statistics every time a report is visited, slowing down the system and increasing the load on the database.

How to generate and check strong passwords in Linux


Introduction

Different operations require different security layers. Accessing your email account, your social media, your bank account, and a remote server that you administer through SSH all need different security layers, and involve data of different "weight".
But, in order to access all of these services, you will always require the same thing: a password.
We all know that a strong password is what you really need in order to be protected from attacks, and, of course, we all know that it’s better to use different passwords for different services.
A very unwise decision (and common mistake) would be using your server password to access Facebook. This decision could cause you lots of trouble.
So how can we easily manage the task of creating strong passwords?
In this tutorial, we will talk about how to generate and check your passwords.

Generate a strong password

A strong password should be composed of a mix of letters, numbers, and symbols.
A second requirement is to not use known words, birth dates or names, because you would be vulnerable to dictionary attacks.
Another important question to ask: how many characters should a password contain? There is actually no concrete answer, but having more than 16 characters is a great choice.
So, if your system has OpenSSL or GPG, you can use those tools to accomplish the generation task.
For example, we can generate one with GPG using the following command:

$ gpg --gen-random --armor 1 32

In my case, just now, the result was:

6lS7cgCyT9vkCZIDQIXcgbXk7bkoVZqdZ0U4HZ4RJw8=

Similarly, with OpenSSL:

$ openssl rand -base64 32

and the output is:

CrUk9dNutlsCErYv5U19ZWP0Pe9GwQgwdDgUNEapXjk=

As you can see, it’s incredibly efficient and also very easy!
Note: Do NOT use the previous passwords! These are just examples.

Checking if your password is strong

Now that we have a password, it's time to find out if it passes the test: is your password strong enough, even against a brute-force attack?
In order to determine if the password is strong enough, we're going to use cracklib.

Install cracklib on Linux

To install cracklib on RHEL/Fedora/CentOS, just use yum:

# yum install cracklib

Type the following command to install on Debian-based systems:

# apt-get install libcrack2

So, now we will use the cracklib-check command.

First, we test a simple password:

$ echo "abcd1234" | cracklib-check

If you do this, you'll get:

abcd1234: it is too simplistic/systematic

And what if you use a normal word?

$ echo "admin" | cracklib-check
admin: it is based on a dictionary word

Of course, these results are not surprising. Use an everyday English word, and a dictionary-based attack will succeed in no time at all.
So, it's time to check whether it was a good idea to generate those two passwords earlier!
This time we will write the command in a different way, so the passwords will not be stored in the shell history:

$ cat | cracklib-check

Then, paste:

CrUk9dNutlsCErYv5U19ZWP0Pe9GwQgwdDgUNEapXjk=

You will read:

CrUk9dNutlsCErYv5U19ZWP0Pe9GwQgwdDgUNEapXjk=: OK

In this case, I don't think anyone would be surprised that this password was given the green light. The openssl command followed all the rules necessary for creating a good, strong password.

Password managers

So, that’s all! In this tutorial we have seen how easy it can be to generate and verify a password, but don’t forget to generate a different password for each service! Unfortunately, this leaves you with an assortment of random passwords… how do you remember them all?
Of course, there is software written for this task. A good password manager is what you will need! Happy hunting!

Backup with Percona XtraBackup on Ubuntu 16.04


Introduction

Percona XtraBackup is an open source backup utility for MySQL-based servers that doesn’t lock your database while performing backup. Using it provides the following benefits:

  • Backups are completed quickly and reliably
  • Uninterrupted transaction processing during backups
  • Savings on disk space and network bandwidth
  • Automatic backup verification
  • Higher uptime due to faster restore time

This tool works with MySQL, but also with MariaDB and Percona Server. It supports backup of InnoDB, XtraDB, and HailDB.
In this guide, we'll install Percona XtraBackup on an Ubuntu 16.04 server.

Install Percona XtraBackup

First of all, you’ll need access to your server console (of course).
We will use the latest package of this utility, so add the Percona repository with the following command:

$ wget https://repo.percona.com/apt/percona-release_0.1-4.$(lsb_release -sc)_all.deb

Then, install the downloaded package with dpkg:

# dpkg -i percona-release_0.1-4.xenial_all.deb

Now, update the local cache:

# apt-get update

And finally, install Percona XtraBackup:

# apt-get install percona-xtrabackup-24

We have now successfully installed Percona XtraBackup.

Configure a new user and a backup directory

Assuming you have a MySQL compatible database already, access its shell as the root user:

mysql -u root -p

Then, create a new user, called ‘backupusr’, with the password ‘mypwd’:

CREATE USER 'backupusr'@'localhost' IDENTIFIED BY 'mypwd';

Grant this user the RELOAD, PROCESS, LOCK TABLES, and REPLICATION CLIENT privileges:

GRANT RELOAD, LOCK TABLES, PROCESS, REPLICATION CLIENT ON *.* TO 'backupusr'@'localhost';
FLUSH PRIVILEGES;
exit

Then, create a folder in which you can store the backup, for example:

mkdir -p ~/backup/mysql/

Performing backup

So, now we want to back up our databases. To start the process:

xtrabackup --backup --target-dir=~/backup/mysql/

If the target directory does not exist, xtrabackup creates it. If the directory does exist and is empty, xtrabackup will succeed. xtrabackup will not overwrite existing files.
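Keep in mind that a freshly taken backup is not consistent until it has been prepared; before restoring, you would typically run a prepare pass against the same target directory:

xtrabackup --prepare --target-dir=~/backup/mysql/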

That's it! You have performed a backup of your database.

NOTE: if the DB data or log files aren't stored in the same directory, you might need to specify their locations.

How to install LDAP on CentOS 7

Introduction

LDAP stands for Lightweight Directory Access Protocol and, as the name suggests, it’s a standard protocol for accessing and maintaining distributed directory information services over an IP network.
In this tutorial, we'll install an LDAP server on CentOS 7 using 389 Directory Server.

Getting started

First of all, configure the FQDN in /etc/hosts.
In that file, put the server’s fully qualified domain name.

127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
X.X.X.X yourserver.hostname.lan server

Of course, substitute the last line in this example with your server's information.

Configure firewall

As you may know, we need to allow the LDAP server's default ports through the firewall in order to access it from a remote system.
So:

firewall-cmd --permanent --add-port=389/tcp
firewall-cmd --permanent --add-port=636/tcp
firewall-cmd --permanent --add-port=9830/tcp

Now, reload the firewalld configuration.

firewall-cmd --reload

Create a user account

Now, create a new user.

useradd ldapuser

and set a password

passwd ldapuser

Restart CentOS.

Install LDAP Server

Note: you need to have the EPEL repository enabled.
Install 389 DS server:

yum install 389-ds-base 389-admin

After installing it, it's time to configure it:

setup-ds-admin.pl

Conclusion

At this point, you have installed and configured everything. You can, of course, tweak things further, for example by enabling the directory server and directory admin services automatically on every reboot, as sketched below. It's up to you!
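As a sketch, enabling both services at boot with systemd would look like this (replace instancename with the instance you created during setup-ds-admin.pl):

systemctl enable dirsrv@instancename.service
systemctl enable dirsrv-admin.service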

How to install Gitlab on Debian 8

What is Gitlab?

As stated on the official website, "GitLab is an application to code, test, and deploy code together. It provides Git repository management with fine grained access controls, code reviews, issue tracking, activity feeds, wikis, and continuous integration".
Many big IT companies, like IBM, Red Hat and StackExchange, use it every day for managing their projects, because it is probably the best solution for building your own Git server.

Getting started – Install prerequisites

Suppose you want to install GitLab Community Edition on your server running Debian 8, behind an Nginx proxy. First of all, log in to your server as root. Then:

# apt-get -y install curl openssh-server ca-certificates postfix

During installation, you’ll have to configure Postfix.

Add the repository and install

After that, add the GitLab repository using curl:

# curl -sS https://packages.gitlab.com/install/repositories/gitlab/gitlab-ce/script.deb.sh | sh

Now, you can install GitLab CE with apt:

# apt-get -y install gitlab-ce

If everything went fine, you can go to the next step, which is GitLab configuration.

Configure

This is really easy. You just have to run one command.

# gitlab-ctl reconfigure

After that, go to your server's hostname (or IP address) with a web browser. You will be redirected to a page that lets you change the GitLab admin password.
That's all. Now, after logging in, you will have access to the GitLab dashboard, where you can create and manage your projects.
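If you later need to change the address GitLab answers on, the usual place in the Omnibus package is /etc/gitlab/gitlab.rb (the hostname below is just an example):

external_url 'http://gitlab.example.com'

After editing it, run gitlab-ctl reconfigure again to apply the change.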

How to prevent SSH from disconnecting sessions

Introduction

If you work with SSH, you already know that, after a few minutes of inactivity, a session is closed automatically, for security reasons. In fact, you could have forgotten to shut it down, and someone could take control of your system. But, if you think this is not a problem for you, you can change this behaviour in your GNU/Linux configuration. The following changes must be made on the SSH client.

How to stop SSH from disconnecting

The following steps need to be performed on your SSH client, not on the remote server.

First of all, open your text editor and modify your current user config file, which is located at:

~/.ssh/config

Add the following lines:

Host *
 ServerAliveInterval 60

ensuring that the second line starts with a space.

The first line tells SSH to apply this configuration to all remote hosts. Of course, you can target just one of them by replacing '*' with the desired host.
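For instance, to keep alive only the connection to a single machine (the hostname is an example), you would write:

Host myserver.example.com
 ServerAliveInterval 60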

There is nothing to reload after saving the file: SSH reads ~/.ssh/config every time you start a new connection, so the setting takes effect automatically.

To apply these settings globally, add or modify the following line in the /etc/ssh/ssh_config file:

ServerAliveInterval 60

Save and close the file.

In this way, your SSH session will not be closed due to inactivity.

Conclusion

Remember that the security of your system, especially in the case of a server, is not secondary, and that it is all in your hands. Change the behaviour of programs like SSH only if you know exactly what you want, and after making sure that it will not cause you any problems.

Upgrading from Fedora 24 to Fedora 25

Introduction

We already talked about the release of Fedora 25, and the improvements it brings. In this article, we will see how to upgrade from Fedora 24.

If you are using the Workstation edition with GNOME, you should see a notification about the upgrade. In this case (but even if you didn't receive the notification), you can upgrade using the GNOME Software graphical package manager.

A general way to upgrade

If you want to upgrade a Server edition, or you are just a CLI-user, the shell is there for you. You can complete the task using the powerful DNF system.
First of all, make sure that your installation is up to date, and, of course, that you have a backup of your most important files!
When you are ready, just install the DNF system upgrade plugin and then download the new version of Fedora, with the two commands below:

# dnf install dnf-plugin-system-upgrade
# dnf system-upgrade download --releasever=25

That's all. The system will download all the packages; you then reboot into the upgrade process (the command is shown below), and when it finishes you'll be ready to use your new Fedora 25.
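The reboot that actually performs the upgrade is triggered through the same plugin:

# dnf system-upgrade reboot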

Fedora 25 released!

Fedora 25, a lot of improvements

On November 22, Red Hat's Fedora community Linux distribution debuted its second major milestone release of 2016. Fedora is not just a GNU/Linux distribution, but a global community working to improve free and open source software.

As usual, the community delivered three editions: Fedora 25 Workstation, Fedora 25 Server, Fedora 25 Atomic Host.

Fedora 25 Workstation

This is where the big things happen. The most important change is the debut of the Wayland display server as the default. After decades, the X11 system is ready to step aside, replaced by a modern infrastructure providing a richer experience for graphical environments and a better way to interact with hardware.
The flagship version also includes GNOME 3.22, which brings many enhancements. Developers will find updated tools and improved Flatpak support.
And there is a new entry. As stated in the release notes: "Fedora 25 provides the Rust compiler and its Cargo package management tool".

Fedora 25 Server

This version now includes a new SELinux Troubleshooter module for Cockpit; it helps when an SELinux denial is encountered, without manual workarounds.
And what if you need to see which keys are connecting to a machine? Fedora will now display SSH keys in the Cockpit dashboard!
There are also improvements in FreeIPA, which has been upgraded to version 4.4.

Getting more information

For the complete release notes, take a look here.

Remove duplicate files with fdupes

Introduction

Sometimes we all need to find duplicate files on our system; this is a very tedious task, especially if we have to do it "by hand".
If you are a GNU/Linux user (and if you are reading this, you are), you know that, following the UNIX tradition, there is a tool for everything. And GitHub has the solution: fdupes. This is a tool written in C and released under the MIT license for identifying duplicate files residing within specified directories.

Getting fdupes

Of course, you could build it from source, but it's not necessary.
On Debian-based systems you can install it with APT:

$ sudo apt-get install fdupes

On CentOS and RHEL, after enabling the EPEL repository, use yum; on Fedora, use dnf:

# yum install fdupes
# dnf install fdupes

How to use fdupes

Using fdupes is really easy. To find duplicates, you just run:

$ fdupes /path/to/some/directory

This command will only look in the directory specified as an argument, and will print out the list of duplicate files (if there are any).
If you also want to look in subdirectories, add the "-r" option, which stands for "recursively".
And what if you want to see the size of files? Of course you can:

$ fdupes -S /path/to/some/directory

You can specify more than one directory:

$ fdupes /path/to/first/directory /path/to/second/directory

and so on.

Then, if you want to delete all the duplicates, just:

$ fdupes -d /path/to/directory

For each set of duplicates, this will prompt you for the files to preserve and delete the others.
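If you trust fdupes to decide for you, the -N option combined with -d keeps the first file of each set and deletes the rest without prompting; together with recursion, that is:

$ fdupes -rdN /path/to/directory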

Conclusion

For a complete list of options:

$ fdupes -h

which will print out

Usage: fdupes [options] DIRECTORY...
-r --recurse for every directory given follow subdirectories
encountered within
-R --recurse: for each directory given after this option follow
subdirectories encountered within (note the ':' at
the end of the option, manpage for more details)
-s --symlinks follow symlinks
-H --hardlinks normally, when two or more files point to the same
disk area they are treated as non-duplicates; this
option will change this behavior
-n --noempty exclude zero-length files from consideration
-A --nohidden exclude hidden files from consideration
-f --omitfirst omit the first file in each set of matches
-1 --sameline list each set of matches on a single line
-S --size show size of duplicate files
-m --summarize summarize dupe information
-q --quiet hide progress indicator
-d --delete prompt user for files to preserve and delete all
others; important: under particular circumstances,
data may be lost when using this option together
with -s or --symlinks, or when specifying a
particular directory more than once; refer to the
fdupes documentation for additional information
-N --noprompt together with --delete, preserve the first file in
each set of duplicates and delete the rest without
prompting the user
-v --version display fdupes version
-h --help display this help message

So, now you can clean your filesystem from duplicates!

WireShark 2.2.2: 30 bug fixes


WireShark is the most popular network protocol analyzer. Admins use it mostly for troubleshooting, developers for testing new protocols, and it's also a very good tool for educational purposes.
But, like all software in the world, it contains bugs. If you use version 2.2.1, you may have run into some problems, but even if you haven't, you should be glad to know that the team has released an update, bringing WireShark to version 2.2.2 with 30 security and bug fixes.

Bugfixes

Here is the complete list of bug fixes:

  • TCP: nextseq incorrect if TCP_MAX_UNACKED_SEGMENTS exceeded & FIN true.
  • dmg for OS X does not install man pages.
  • Fails to compile against Heimdal 1.5.3.
  • TCP: Next sequence number off by one when sending payload in SYN packet (e.g. TFO).
  • Follow TCP Stream shows duplicate stream data.
  • Dissection engine falsely asserts that EIGRP packet’s checksum is incorrect.
  • IEEE 802.15.4 frames erroneously handed over to ZigBee dissector.
  • Capture Filter Bookmark Inactive in Capture Options page.
  • CLNP dissector does not parse ER NPDU properly.
  • SNMP trap bindings for NON scalar OIDs.
  • BGP LS Link Protection Type TLV (1093) decoding.
  • Application crash sorting column for tcp.window_size_scalefactor up and down.
  • ZigBee Green Power add key during execution.
  • Malformed AMPQ packets for session.expected and session.confirmed fields.
  • Wireshark 2.2.1 crashes when attempting to merge pcap files.
  • [IS-637A] SMS – Teleservice layer parameter -→ IA5 encoded text is not correctly displayed.
  • Failure to dissect USB Audio feature unit descriptors missing the iFeature field.
  • MSISDN not populated/decoded in JSON GTP-C decoding.
  • E212: 3 digits MNC are identified as 2 digits long if they end with a 0.
  • Exception with last unknown Cisco AVP available in a SCCRQ message.
  • TShark stalls on FreeBSD if androiddump is present.
  • Dissector skips DICOM command.
  • UUID (FT_GUID) filtering isn’t working.
  • Manufacturer name resolution fail.
  • packet-sdp.c allocates transport_info→encoding_name from wrong memory pool.
  • Payload type name for dynamic payload is wrong for reverse RTP channels.

Not only bug fixes

Along with these important fixes, WireShark 2.2.2 also introduces updates to 6LoWPAN, AllJoyn, AMPQ, ANSI IS-637 A, BGP, CLNP, DCERPC, DICOM, DTN, E.212, EIGRP, ERF, GVSP, IEEE 802.11, IEEE 802.15.4, IP, ISO-8583, Kerberos, L2TP, LACP, MAC LTE, OpenFlow, Profinet I/O, RTPS, SCTP, SDP, Skype, SMPP, SNA, SNMP, SPNEGO, TCP, USB Audio, XML, and ZigBee protocol support.

Getting WireShark

So, if you need WireShark (and maybe you do), this is a “must-have” update.
Installation packages and source code can be found at the official website download page.

Onda V80 SE Tablet Review

Before you get up in arms about how Unixmen is about Linux and open source, let me just say that these product reviews will be few and somewhat rare.  But when we are sent interesting products to review, we tend to give them a chance.  This time, we are reviewing a Chinese tablet called the Onda V80.  And guess what, we were pleasantly surprised.

For only $90, this large tablet is on par with the Amazon Fire tablet, which sits at the lower end of the tablet pricing market.  The differences between the two are quite interesting.  On the one hand, the V80 comes with an odd Chinese Android operating system, mixed between English and Chinese.  For the English-speaking sector, out of the box, this can be a bit of a turn-off… but there are settings to alleviate this.

Secondly, this tablet is really fast for the price, and seems to outperform the Amazon Fire tablet by a mile. It could just be that this unit is stripped down and doesn't come with a lot of pre-loaded material, but it feels good. In addition, if you wanted to play with the operating system or install something else, this feels like the perfect unit for that.

So let’s do a breakdown:

Negatives:
-Language mix can be confusing upon first open
-Feels a bit light for what it is, experience is very plastic-y

Positives:
-Fast operating system
-Large, high resolution screen
-Speedy web experience
-Inexpensive

In the end, we think this is a perfect tablet for business use on the go.  It has a fast browser experience and in the world of tablets under $100, this one feels well worth it. 

Privilege-escalation bug present for 9 years in Linux


Under active exploit

There is a critical bug, Dirty COW, that has been present in virtually all GNU/Linux distributions for the last nine years and is now under active exploit.
Even though this is a privilege-escalation vulnerability, security researchers are taking it extremely seriously, for many reasons: first of all, it seems that it is not hard to develop an exploit based on it. But there's another problem: this bug is located in a part of the Linux kernel that has been present in almost every distribution of the OS for almost a decade.
Last but not least, researchers have found that the vulnerability is being actively and maliciously exploited.
We know that kernel developers already knew about this bug and tried to fix it eleven years ago. However, the fix was undone for technical reasons.

At the time of writing, a patch has already been written and released by the maintainers of the official kernel tree. Phil Oester, the man who found the bug, urges everyone to distribute and install the patch.

Red Hat has classified it as important and has already planned to address it in future updates.
Curiously, this happened in the same week in which a Google researcher showed that the average lifetime of a Linux bug is five years.

Remember to take it seriously and update all the kernels you have control over.

Upgrading to PostgreSQL 9.5 in Fedora 24

What’s new in PostgreSQL 9.5

PostgreSQL is the world's leading open source database; version 9.5 brings a lot of enhancements, reported here without going into depth:

  • IMPORT FOREIGN SCHEMA
  • Row-Level Security Policies
  • BRIN Indexes
  • Foreign Table Inheritance
  • GROUPING SETS, CUBE and ROLLUP
  • JSONB-modifying operators and functions
  • INSERT … ON CONFLICT DO NOTHING/UPDATE (“UPSERT”)
  • pg_rewind

And a lot of other things, which you can find in detail in the PostgreSQL wiki.
Here, we'll only care about upgrading from 9.4 to 9.5 on Fedora 24.

Back up your data

This is just one (important) piece of advice: before proceeding with the upgrade, back up all your data.
The procedure for upgrading PostgreSQL is not automatic, so you have to perform some operations manually, but, as you will see, it's very easy.

Let’s upgrade

First of all, install the upgrade subpackage:

$ sudo dnf install postgresql-upgrade

Now, you can use it to upgrade PostgreSQL:

$ sudo postgresql-setup --upgrade

At the end of the procedure, look at the log file /var/lib/pgsql/upgrade_postgresql.log for useful details, and then start the systemd service:

$ sudo systemctl start postgresql.service

Now, if you run:

$ sudo systemctl status postgresql.service

you should see it started and ready.

That's all: your PostgreSQL is now at version 9.5, with all its new features available.

Fedora 25: On the way to Wayland

This is not just another upgrade

The Fedora Project has already announced the release of Fedora 25. It has been 13 years since the release of the first version. Since then, with the (indirect) support of Red Hat and of the GNU/Linux community, a lot of things have changed, and not only in the project itself; as its users surely remember, Fedora introduced systemd six years ago, while other distributions were still arguing about it. And now, just like then, it's time for another great change.

Is this the end of X11?

The X server has been a "core" part of GNU/Linux for decades. First developed in 1984, it appears to be in its final years.

Changes in Fedora 25

Fedora 25 will come with a lot of changes, here reported schematically:

  • GNOME 3.22
  • New Fedora Media writer
  • Improved Flatpak support in the Software tool
  • Support for the Rust programming language
  • Wayland by default

The last item is the most important. For the first time, X11, while still present in the distro, will not be the default display server. In fact, Wayland (which has been a required dependency of GNOME since 3.18), with its totally different architecture, is on the way to taking its place, and the Fedora Project, while acknowledging that like most software it still has some bugs, thinks it's time to give it a chance to serve users.
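Once logged in to Fedora 25, you can check which display server your session is actually using:

$ echo $XDG_SESSION_TYPE

This should print wayland, or x11 if you picked the fallback session at the login screen.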

You can download and try Fedora 25 Workstation Beta and start testing an important component of the future of GNU/Linux systems.

How to configure rsnapshot for local backup


Introduction

We already talked about using rsync to make (and restore) a backup. Today, we'll go further and talk about a tool built upon rsync: rsnapshot.
This is a Perl script which makes it possible to take snapshots of the filesystem at different points in time. In short, the first backup is a "full image" of the filesystem, while afterwards it saves only the differences, intelligently using hard links.
If you use it with SSH, it is possible to use the tool to back up remote machines.

Configure

During the installation process, an example config file is created in /etc, called rsnapshot.conf.default. The first thing to do is to make this your config file.
Just:

cp /etc/rsnapshot.conf.default /etc/rsnapshot.conf

Now, it’s time to adapt the config file.

Modify the config file

First, we have to choose a place in which to save the snapshots. In this example, it will be /.snapshots/: this is called the snapshot_root.

Note: in the following commands, elements are separated by tabs, not spaces.

Uncomment cmd_cp and make sure that it contains the path to GNU cp.
Then, do the same with cmd_rsync, which, as the name suggests, "points" to rsync. If you also want logging functionality, leave cmd_logger uncommented.
After setting up these parameters, it's necessary to decide how often to take snapshots, and this is entirely the user's choice. In order to specify how much data to save, we must tell rsnapshot which "intervals" to keep, and how many of each; in this context, an "interval" is a unit of time.
To accomplish this, check the interval parameter. In the default config file, there are two lines:

interval hourly 6

This means that rsnapshot will keep six "hourly" snapshots, which works out to six snapshots a day if cron runs it every four hours…

interval daily 7

…while this second set of backups will be taken once a day and stored for a week.

It's important to note that "hourly" is written before "daily" in the config file, and this is not arbitrary: the first interval line must be the smallest unit of time.

Now, we must decide which files we want to back up. For instance, we could want to back up the whole content of /home. In /etc/rsnapshot.conf, set up the backup parameter, just like this:

backup /home/ localhost/

localhost/ is the name of the folder, inside our snapshot_root, which will contain the snapshots of /home.
In a similar way, if we want to work with remote machines, backup will contain the full path to the remote filesystem:

backup user@remote.com:/home/ example.com/

In this case, of course, it’s necessary to have SSH configured on our remote systems.

Testing configuration

Testing a configuration file is very easy. Just execute the command:

rsnapshot configtest

and wait for the output, which, if everything went fine, should be “Syntax OK”.

Conclusion

Now that we have configured rsnapshot, the last thing is to automate the process by configuring cron to execute it periodically, as sketched below. The exact schedule is up to you.
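As a sketch, root's crontab entries matching the hourly and daily intervals above could look like this (the schedule itself is your choice):

0 */4 * * * /usr/bin/rsnapshot hourly
30 3 * * * /usr/bin/rsnapshot daily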

As said in the introduction, the amount of disk space taken up will be equal to one full backup, plus an additional copy of every file that changes, with little overhead thanks to the hard links.

Ansible Review: How to easily automate your IT infrastructure

Introduction

Ansible is an IT automation tool which helps with cloud provisioning, configuration management, and application deployment.
The developers designed Ansible with multi-tier systems in mind, aiming for a tool that is simple and easy to use, with security features provided by OpenSSL and OpenSSH.
It models a multi-node infrastructure in terms of the inter-relations between the various components, not just managing one system at a time.
Ansible connects to the infrastructure's nodes, pushes out "Ansible Modules", executes them, and removes everything when finished. All this work is done through SSH by default, but you can choose Kerberos if you want.

Installation

Ansible can be installed from source, since its source code is available on GitHub, but it is also available as prebuilt .deb and .rpm packages.
RPMs are available from yum for EPEL 6, 7, and currently supported Fedora distributions.

$ yum install ansible

If you use Ubuntu, there’s a PPA for it.

$ sudo apt-get install software-properties-common
$ sudo apt-add-repository ppa:ansible/ansible
$ sudo apt-get update
$ sudo apt-get install ansible

Users can also install Ansible using pip, the Python package manager:

$ sudo pip install ansible

Getting started

Ansible, when speaking with remote machines, assumes by default that you are using SSH keys. Though this is the encouraged way, there is also the possibility of using password authentication; if so, users just have to pass the --ask-pass option.

As stated in the Ansible documentation, when using this tool with a "cloud", it's better to run it from a machine inside that cloud; of course, this is just common sense, as technically you can also run it over the Internet.

First commands

On the machine you use for managing the system, edit the /etc/ansible/hosts file, putting in it the list of remote systems you must "control". On those systems, you have to put your public SSH key, of course.

/etc/ansible/hosts is an inventory file. It has an INI-like syntax, just like this:

mail.example.com
[mywebservers]
foo.example.com
bar.example.com
[dbservers]
one.example.com
two.example.com
three.example.com
four.example.com

You can specify a different path for your inventory, or split it across several files. Do whatever you like!

Connect to your nodes

If you configured everything, it’s time to connect!
Just as an example, you can ping all the nodes with a simple:

$ ansible all -m ping

You can also override the default remote user with the -u option, or run in sudo mode with the --sudo flag. It's all in your hands (and your mind, of course). Now your machine should be connected to all the nodes of the infrastructure, so you can interact with them. The syntax for doing this is:

$ ansible all -a "/path/to/command/on/remote/machines"

For instance:

$ ansible all -a "/usr/bin/ls"
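You are not limited to the implicit all group; with the inventory shown earlier, you can address a single group with a specific module, for example:

$ ansible mywebservers -m command -a "uptime"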

Conclusions

Ansible is a modern tool which can change the way many sysadmins manage an enterprise, distributed system. It has a lot of features, and in the coming weeks we will go more in depth in analyzing them. In this short overview we showed how easy it can be to set up the environment and execute a command on remote machines, but this is just a small part of what users can accomplish with this software.

APT: Rebuilding Package in Debian

Package rebuilding is something easily done in Debian. As a sysadmin, you might find yourself needing to rebuild a package, perhaps to enable a feature or something similar.

This is exactly what I will show you how to do in this post, using the squid3 package. Before we proceed, I need to make sure my repository list in /etc/apt/sources.list has a deb-src entry. With this, you are able to download the source files of packages in a repository. Now, in my sources.list I have this:

deb http://security.debian.org/ stable/updates main contrib non-free

This alone won't get me the source files of squid3; I need to add the deb-src part, making it look like this:

deb http://security.debian.org/ stable/updates main contrib non-free
deb-src http://security.debian.org/ stable/updates main contrib non-free

Save this and run:

$ apt update

Downloading Source Files and Rebuilding

I have added the deb-src repository and updated my package list; now I can download the source files of squid3:

$ sudo apt source squid3


After a successful download of the source files, I'm going to download the build dependencies (the packages needed to build this package) by running:

$ sudo apt build-dep squid3

This should install some packages if they are not already installed on your system. After that, we can make any modifications we need to the package source files, and then we're ready to rebuild!

Modifying Package and Rebuilding

In this post, I will enable http-violations in squid3 by cd-ing into the package source folder I just downloaded and editing the file debian/rules, adding the option --enable-http-violations to the configure flags and saving the file.

Building

Before building, I will make sure I have devscripts installed:

$ sudo apt install devscripts

With the installation done, I cd back into the directory of the source package and then run:

$ debuild -uc -us -b

The rebuild of the package starts, and plenty of debug output will be printed to your console, which you don't need to worry about. When the build is done, you will see something similar to this at the end:

[...]
dh_gencontrol -psquid  
dpkg-gencontrol: warning: package squid: unused substitution variable ${perl:Depends}
dh_md5sums -psquid 
dh_builddeb -psquid 
dpkg-deb: building package 'squid' in '../squid_3.5.19-1_amd64.deb'.
dh_gencontrol -psquid-dbg  
dh_md5sums -psquid-dbg 
dh_builddeb -psquid-dbg 
dpkg-deb: building package 'squid-dbg' in '../squid-dbg_3.5.19-1_amd64.deb'.
dh_gencontrol -psquidclient  
dh_md5sums -psquidclient 
dh_builddeb -psquidclient 
dpkg-deb: building package 'squidclient' in '../squidclient_3.5.19-1_amd64.deb'.
dh_gencontrol -psquid-cgi  
dh_md5sums -psquid-cgi 
dh_builddeb -psquid-cgi 
dpkg-deb: building package 'squid-cgi' in '../squid-cgi_3.5.19-1_amd64.deb'.
dh_gencontrol -psquid-purge  
dh_md5sums -psquid-purge 
dh_builddeb -psquid-purge 
dpkg-deb: building package 'squid-purge' in '../squid-purge_3.5.19-1_amd64.deb'.
 dpkg-genchanges --build=any,all >../squid3_3.5.19-1_amd64.changes
dpkg-genchanges: info: binary-only upload (no source code included)
 dpkg-source --after-build squid3-3.5.19
dpkg-buildpackage: info: binary-only upload (no source included)
Now running lintian...
N: 1 tag overridden (1 warning)
Finished running lintian.

The packages have been successfully rebuilt and are kept in the parent directory. You can now install them using:

$ sudo dpkg -i ../squid*.deb
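One caveat: a later apt upgrade will happily replace your rebuilt packages with the repository versions. If you want to keep your custom build, put the package on hold (shown here for the main binary package):

$ sudo apt-mark hold squid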