VNC server on Ubuntu with login screen after boot

Source:

http://askubuntu.com/a/676978

http://seb.so/vnc-from-boot-without-logging-in-ubuntu-lubuntu-xubuntu-and-mint-lmde/

The problem:

Install a VNC server on Ubuntu and have it start at boot, with access to the login screen (note: Vino, the default VNC server on Ubuntu, does not seem to allow access to the login screen after boot).

The solution:

1)

sudo apt-get install x11vnc
sudo x11vnc -storepasswd /etc/x11vnc.pass

2a)

for Ubuntu 15.04+ (systemd)

sudo nano /lib/systemd/system/x11vnc.service

and enter the following code

[Unit]
Description=x11vnc
Requires=display-manager.service
After=display-manager.service

[Service]
ExecStart=/usr/bin/x11vnc -xkb -noxrecord -noxfixes -noxdamage -display :0 -auth guess -rfbauth /etc/x11vnc.pass
ExecStop=/usr/bin/killall x11vnc
Restart=on-failure
RestartSec=2

[Install]
WantedBy=multi-user.target

then

sudo systemctl daemon-reload
sudo systemctl start x11vnc
sudo systemctl enable x11vnc
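To check that the service started correctly and that x11vnc is listening on the default VNC port (5900), a quick sanity check (assuming standard tools on the system) is:

sudo systemctl status x11vnc
sudo ss -tlnp | grep 5900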

2b)

for Ubuntu 12.04–14.10 (upstart, with LightDM; if another display manager such as GDM is used, the path below must be changed accordingly)

sudo nano /etc/init/x11vnc.conf

and enter the following code

start on login-session-start
script
/usr/bin/x11vnc -xkb -auth /var/run/lightdm/root/:0 -noxrecord -noxfixes -noxdamage -rfbauth /etc/x11vnc.pass -forever -bg -rfbport 5900 -o /var/log/x11vnc.log
end script

then reboot.

Tunnel via SSH

The VNC port (5900) is usually not open to access from outside the local network, but if the machine has SSH access, that can be used to tunnel the VNC connection. The easiest way is to use a VNC client that supports SSH tunneling (no further configuration is needed on the server or the client); Remmina on Linux and TightVNC on Windows are good choices.
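If the chosen client does not support SSH tunneling, the tunnel can also be created by hand; a minimal sketch (the local port 5901 is an arbitrary choice):

ssh -f -N -L 5901:localhost:5900 user@remote-host
# then point the VNC client at localhost:5901 instead of remote-host:5900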


Installing and updating software using the OSGeo4W/CYGWIN command line

See:

Installing QGIS using apt on Windows (OSGeo4W)

Examples:

Install the package “apt” (at the moment available only in the 32-bit version of OSGeo4W). Sample commands:

apt setup 
apt upgrade 
apt update 
apt install packagename

The above commands can also be used in a batch file to run automatic updates; for that, a couple of environment variables need to be set:

@echo off
set OSGEO4W_ROOT=C:\OSGeo4W
set PATH=%OSGEO4W_ROOT%\bin;%PATH%
apt update
apt install qgis-dev
pause

Using the Cygwin console, setting the variables becomes:

export OSGEO4W_ROOT=/cygdrive/c/OSGeo4W
export PATH=$OSGEO4W_ROOT/bin:$PATH

Adding c:\cygwin\bin and/or c:\OSGeo4W\bin to the “path” environment variable also helps in running useful commands from the command line, including remotely.

Cygwin does not ship with an apt-like tool out of the box, but one can easily be added:

wget raw.github.com/transcode-open/apt-cyg/master/apt-cyg
chmod +x apt-cyg
mv apt-cyg /cygdrive/c/cygwin/bin
apt-cyg install packagename

A different method:

An alternative is to directly use the command-line parameters that the OSGeo4W installer provides. Example:

osgeo4w-setup --arch x86_64 --advanced --root d:\your-osgeo4w --local-package-dir %TEMP%\osgeo4w --site http://download.osgeo.org/osgeo4w --autoaccept --quiet-mode --packages qgis-dev

or even simply

osgeo4w-setup.exe -q -P packagename

The same approach also works for installing Cygwin packages:

setup-x86.exe -q -P packagename
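Putting the pieces together, a hypothetical batch file for unattended Cygwin updates could look like the sketch below (the install root, mirror URL, and package names are example values to adapt):

@echo off
rem assumes setup-x86.exe is in the current folder and Cygwin is installed in c:\cygwin
setup-x86.exe -q -R c:\cygwin -s http://mirrors.kernel.org/sourceware/cygwin/ -P wget,rsync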

Ubuntu 14.04/15.10 MultiSeat configuration

The problem:

One Ubuntu desktop PC with two graphics cards, two LCDs, and two mice and keyboards, configured so that it can be used by two different users at the same time, each one using their own set of LCD/mouse/keyboard/USB ports, etc.

The graphics cards available in the PC are the Intel one integrated on the motherboard and an NVIDIA one on a PCIe expansion card.

Notes:

Documentation is not entirely missing (just Google for it and see), but it is often old or very technical, and in general it gives the impression that this is a very complicated setup; at least for simple configurations, it is not.

The pages that were fundamental to understand how to solve the problem are:

http://en.softmaker.kz/software-for-everyone/multiseat-setup-on-ubuntu-1404.html

https://wiki.ubuntu.com/MultiseatTeam/Instructions

The solution:

  • Detecting video cards and other devices (especially mouses, keyboard and USB devices)
  • Setting up rules for Multiseat mode

Step 1 (not sure it is necessary at all, but at least it won’t hurt):

sudo apt-add-repository ppa:ubuntu-multiseat/ppa
sudo apt-get update
sudo apt-get upgrade

Step 2:

Detect video cards and USB devices:

lspci | grep VGA
lsusb

and find the unique identification number of each device and port. The list is long, so export it to a text file:

udevadm info --export-db > /home/user/udevadm.txt
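The dump is easier to search with grep; for example, to locate an NVIDIA card (the vendor string here is an assumption, adapt it to the actual hardware):

grep -B1 -A5 'ID_VENDOR_FROM_DATABASE=NVIDIA' /home/user/udevadm.txt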

Each user will have their own SEAT. A default seat always exists and is called seat0; each additional seat must be called seat-1, seat-2, seat-3, etc.

Only the devices that belong to seats other than seat0 must be explicitly configured; all the other devices will automatically be assigned to seat0.

Step 3:

Examine the file and look for the “DEVPATH” of the devices you want to grant to seats other than seat0. Example:

P: /devices/pci0000:00/0000:00:01.0/0000:01:00.0
E: DEVPATH=/devices/pci0000:00/0000:00:01.0/0000:01:00.0
E: DRIVER=nouveau
E: ID_MODEL_FROM_DATABASE=GT218 [GeForce 210]
E: ID_PCI_CLASS_FROM_DATABASE=Display controller
E: ID_PCI_INTERFACE_FROM_DATABASE=VGA controller
E: ID_PCI_SUBCLASS_FROM_DATABASE=VGA compatible controller
E: ID_VENDOR_FROM_DATABASE=NVIDIA Corporation
E: MODALIAS=pci:v000010DEd00000A65sv00001043sd0000852Dbc03sc00i00
E: PCI_CLASS=30000
E: PCI_ID=10DE:0A65
E: PCI_SLOT_NAME=0000:01:00.0
E: PCI_SUBSYS_ID=1043:852D
E: SUBSYSTEM=pci
E: USEC_INITIALIZED=8621

Step 4:

Create a udev rule file that defines the devices granted to seats other than seat0

sudo nano /etc/udev/rules.d/99-multiseat.rules

An example of such file is:

# ************************ SEAT-1 ************************

# USB port function mouse for seat-1
TAG=="seat", DEVPATH=="/devices/pci0000:00/0000:00:14.0/usb1/1-4*", ENV{ID_SEAT}="seat-1", TAG+="seat-1"

# USB port function keyboard for seat-1
TAG=="seat", DEVPATH=="/devices/pci0000:00/0000:00:14.0/usb1/1-3*", ENV{ID_SEAT}="seat-1", TAG+="seat-1"

# USB 2.0 hub for seat-1
TAG=="seat", DEVPATH=="/devices/pci0000:00/0000:00:14.0/usb2*", ENV{ID_SEAT}="seat-1", TAG+="seat-1"

# Videocard function GeForce for seat-1
TAG=="seat", DEVPATH=="/devices/pci0000:00/0000:00:01.0/0000:01:00.0*", ENV{ID_SEAT}="seat-1", TAG+="seat-1"

NOTE: each rule must stay on a single line (no line breaks within a rule).
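As an alternative to writing such rules by hand, recent versions of systemd-logind can generate them: loginctl attach takes a seat name and the sysfs path of a device and writes an equivalent rule under /etc/udev/rules.d/. A sketch, reusing the example NVIDIA card from Step 3:

loginctl attach seat-1 /sys/devices/pci0000:00/0000:00:01.0/0000:01:00.0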

Step 5:

Create a new configuration file for LightDM

sudo nano /etc/lightdm/lightdm.conf

that must contain

[LightDM]
logind-load-seats=true

Step 6:

Reboot the machine.

Step 7:

Check the list of seats and the list of devices assigned to each seat:

loginctl list-seats
loginctl seat-status seat0
loginctl seat-status seat-1

Issues:

The above setup works fine, but on Ubuntu 14.04 it is not free of issues:

On seat-1, the seat that uses the NVIDIA graphics card, the system can freeze completely when the user performs certain text-selection operations in LibreOffice. As the bug reports found online suggest, this apparently is not an issue related to multiseat configurations; see for example

http://askubuntu.com/questions/671487/ubunty-14-04-freezes-when-i-select-a-big-chunk-of-text-in-libreoffice

The solution seems to be to install the proprietary NVIDIA drivers in place of the open-source ones (Nouveau), but this leads to a bigger issue after rebooting: no graphics output on seat-1.

As suggested in

https://wiki.ubuntu.com/MultiseatTeam/Instructions

I added the “master-of-seat” configuration, to no avail.

The problem seems to lie in the packages that allow the user to switch between the Intel and NVIDIA outputs, but after a lot of effort I wasn’t able to figure out how to solve the issue.

As a last resort I installed Ubuntu 15.10 instead of 14.04 and applied the above configuration; in this case, using the Nouveau drivers, there are no issues with LibreOffice on seat-1.


OpenStreetMap: import data into a PostGIS database and incrementally update it

The problem:

Load OpenStreetMap data into a PostGIS database and have it automatically updated incrementally (so each update loads just the changes that have meanwhile happened on the OSM servers).

The solution:

There are (really) a lot of tools available to handle OSM data, and in a few cases they are just alternatives for doing the same thing. The documentation of these tools (osm2pgsql, osmupdate, osmfilter, osmconvert, osmium, osmosis, imposm, etc.) is good but not great (and often not up to date), especially when it comes to “recipes” explaining how to put them together to achieve a specific goal. The following notes solve the aforementioned problem, but consider that there is probably a much more straightforward and clean way to do it. Special mention goes to this tutorial, which was fundamental to understanding how to achieve the goal.

The solution passes through a series of steps that can be wrapped into one or more scripts and then scheduled with CRON.


Phase 1, downloading, preparing and importing the data

Step 1: get OSM data

Daily updated .pbf/.osm (and shapefile) datasets, clipped along the national borders of most countries, can be downloaded from the Geofabrik OSM download facility. On the same page, daily updated datasets for each continent are also available.

Another option is to download the full Planet file, which includes the entire (worldwide) OSM dataset.

Step 2: limit the data from the Planet to a specific area of interest

The following steps can be used to import and update the full Planet, but this will take a long time (depending also on the hardware resources of the server/computer where the operations are carried out). More probably, only a specific, limited geographic region is needed.

The Planet dataset can then be clipped using a polygon file (.poly), which can easily be created from a shapefile using this QGIS plugin.

The --complete-ways option is not used because it is not supported by osmupdate (used later), so ways crossing the clip boundaries are removed. For this reason it is important that the .poly file describes an extent slightly bigger than the real area of interest.

The .o5m format is used for the output because the update operation (by osmupdate) is faster than the same operation done with .pbf or .osm files as input.

$ osmconvert --verbose planet-latest.osm.pbf -B=portugal.poly -o=planet-portugal.o5m

Step 3: filter the dataset and leave just features with a specific OSM attribute/tag

In this specific example, only the roads tagged as “highway” are to be imported into the database. The tag name is misleading: “highway” means any road that can be used by cars, even the smallest one. Roads/ways that cannot be used by car are tagged as “pedestrian”.

$ osmfilter --verbose planet-portugal.o5m --keep= --keep-ways="highway" --out-o5m > portugal_estradas.o5m

Step 4: remove broken references

References to nodes that have been excluded because they lie outside the geographic borders of the area of interest need to be removed with the --drop-broken-refs option.

The -b=-180,-90,180,90 option defining a global bounding box seems superfluous, but it is actually necessary to circumvent a bug in the --drop-broken-refs task that would otherwise leave only nodes in the data:

$ osmconvert --verbose portugal_estradas.o5m -b=-180,-90,180,90 --drop-broken-refs -o=portugal_estradas_nbr.o5m

Step 5: import the data into PostGIS using osm2pgsql

The important bit here is the --slim flag: without it, it will not be possible to update the database incrementally later.
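If the target database does not exist yet, it must be created first, with the PostGIS and hstore extensions enabled; a minimal sketch (database and user names are placeholders):

$ createdb -U username databasename
$ psql -U username -d databasename -c "CREATE EXTENSION postgis; CREATE EXTENSION hstore;"

Then the import itself: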

$ osm2pgsql --flat-nodes flat_nodes.bin --slim --create --cache 16000 --number-processes 12 --hstore --style openstreetmap-carto.style --multi-geometry portugal_estradas_nbr.o5m -H localhost -d databasename -U username --proj 32629

Phase 2, updating the data

A freshly downloaded Planet file is already old, because a new version is published only once a week (while changes on the OSM servers are continuous). So right after the first import, the data in the database can already be updated.

Step 6: update the dataset

With osmupdate, update the dataset obtained in Step 2:

$ osmupdate --verbose planet-portugal.o5m planet-portugal-updated.o5m -B=portugal.poly

Step 7: filter the dataset and leave just features with a specific OSM attribute/tag

$ osmfilter --verbose planet-portugal-updated.o5m --keep= --keep-ways="highway" --out-o5m > portugal_estradas_updated.o5m

Step 8: remove broken references

$ osmconvert --verbose portugal_estradas_updated.o5m -b=-180,-90,180,90 --drop-broken-refs -o=portugal_estradas_updated_nbr.o5m

Step 9: create the DIFF file

The DIFF file (in .osc format) contains only the differences between two OSM datasets, here the previously imported extract and the freshly updated one (osmconvert's --diff option expects exactly two input files, the older one first):

$ osmconvert --verbose portugal_estradas_nbr.o5m portugal_estradas_updated_nbr.o5m --diff --fake-lonlat -o=diff.osc

Step 10: import the DIFF file

$ osm2pgsql --flat-nodes flat_nodes.bin --slim --append --cache 16000 --number-processes 12 --hstore --style openstreetmap-carto.style --multi-geometry diff.osc -H localhost -d databasename -U username --proj 32629

Final notes:

Wrap steps 6 to 10 into a batch file and schedule it with CRON to get a fully automatic way to have a continuously updated PostGIS database with OSM data.
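A minimal sketch of such a script, under the assumption that file names and flags match the examples above (cache size, process count, and projection would be adapted to the actual setup):

#!/bin/bash
# update-osm.sh - incremental OSM to PostGIS update (steps 6 to 10)
set -e

# Step 6: download and apply the changes published since the last run
osmupdate --verbose planet-portugal.o5m planet-portugal-updated.o5m -B=portugal.poly

# Step 7: keep only the ways tagged "highway"
osmfilter --verbose planet-portugal-updated.o5m --keep= --keep-ways="highway" --out-o5m > portugal_estradas_updated.o5m

# Step 8: drop references to nodes falling outside the clip polygon
osmconvert --verbose portugal_estradas_updated.o5m -b=-180,-90,180,90 --drop-broken-refs -o=portugal_estradas_updated_nbr.o5m

# Step 9: build the .osc diff between the previous and the updated extract
osmconvert --verbose portugal_estradas_nbr.o5m portugal_estradas_updated_nbr.o5m --diff --fake-lonlat -o=diff.osc

# Step 10: apply the diff to the database
osm2pgsql --flat-nodes flat_nodes.bin --slim --append --cache 16000 --number-processes 12 --hstore --style openstreetmap-carto.style --multi-geometry diff.osc -H localhost -d databasename -U username --proj 32629

# the updated files become the baseline for the next run
mv planet-portugal-updated.o5m planet-portugal.o5m
mv portugal_estradas_updated_nbr.o5m portugal_estradas_nbr.o5m

A sample crontab entry running it every night at 04:00 could then be:

0 4 * * * /path/to/update-osm.sh >> /var/log/osm-update.log 2>&1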

The speed of the above process gets a huge boost if it can run on an SSD rather than an HDD.


ogr2ogr, PostGIS and Spatialite: create 3D vectors and transfer geometries Z values to attributes table

Sample data: dataset3dvectors.zip

Create a 3D shapefile from a 2D one

First, check what ogrinfo says about the sample data:

$ ogrinfo -so shapefile_2d.shp shapefile_2d
INFO: Open of `shapefile_2d.shp'
      using driver `ESRI Shapefile' successful.

Layer name: shapefile_2d
Geometry: Point
Feature Count: 5
Extent: (484628.515471, 4227667.038043) - (617462.831823, 4605830.183414)
Layer SRS WKT:
PROJCS["WGS_1984_UTM_Zone_29N",
    GEOGCS["GCS_WGS_1984",
        DATUM["WGS_1984",
            SPHEROID["WGS_84",6378137,298.257223563]],
        PRIMEM["Greenwich",0],
        UNIT["Degree",0.017453292519943295]],
    PROJECTION["Transverse_Mercator"],
    PARAMETER["latitude_of_origin",0],
    PARAMETER["central_meridian",-9],
    PARAMETER["scale_factor",0.9996],
    PARAMETER["false_easting",500000],
    PARAMETER["false_northing",0],
    UNIT["Meter",1]]
z_value: Integer (10.0)

Now generate a real 3D shapefile by transferring the Z values stored in the “z_value” column in the attribute table to the output geometries:

$ ogr2ogr -f "ESRI Shapefile" shapefile_3d.shp shapefile_2d.shp -zfield z_value

Check the result with ogrinfo:

$ ogrinfo -so shapefile_3d.shp shapefile_3d
INFO: Open of `shapefile_3d.shp'
      using driver `ESRI Shapefile' successful.

Layer name: shapefile_3d
Geometry: 3D Point
Feature Count: 5
Extent: (484628.515471, 4227667.038043) - (617462.831823, 4605830.183414)
Layer SRS WKT:
PROJCS["WGS_1984_UTM_Zone_29N",
    GEOGCS["GCS_WGS_1984",
        DATUM["WGS_1984",
            SPHEROID["WGS_84",6378137,298.257223563]],
        PRIMEM["Greenwich",0],
        UNIT["Degree",0.017453292519943295]],
    PROJECTION["Transverse_Mercator"],
    PARAMETER["latitude_of_origin",0],
    PARAMETER["central_meridian",-9],
    PARAMETER["scale_factor",0.9996],
    PARAMETER["false_easting",500000],
    PARAMETER["false_northing",0],
    UNIT["Meter",1]]
z_value: Integer (10.0)

The same also applies when the input layer is a line or polygon layer.

Transfer Z values from a 3D shapefile to the table of attributes of a 2D one

points:

$ ogr2ogr shapefile_with_z_from_3d.shp shapefile_3d.shp -dialect sqlite -sql "SELECT *, st_z(geometry) AS z_geom FROM shapefile_3d"

To force the output to 2D, add the “-dim 2” flag:

$ ogr2ogr shapefile_with_z_from_3d.shp shapefile_3d.shp -dim 2 -dialect sqlite -sql "SELECT *, st_z(geometry) AS z_geom FROM shapefile_3d"

lines:

$ ogr2ogr shapefile_lines_with_z_from_3d.shp shapefile_3d_lines.shp -dim 2 -dialect sqlite -sql "SELECT *, st_z(st_pointn(geometry,1)) AS z_geom FROM shapefile_3d_lines"

polygons:

$ ogr2ogr shapefile_polygons_with_z_from_3d.shp shapefile_3d_polygons.shp -dim 2 -dialect sqlite -sql "SELECT *, st_z(st_pointn(ST_ExteriorRing(geometry),1)) AS z_geom FROM shapefile_3d_polygons"

Rationale: for polygons it is first necessary to get the exterior ring (as a linestring) of each geometry with the ST_ExteriorRing function; then, for both lines and polygons, the Nth point (in these examples the first one) of the linestring is taken with the ST_PointN function. After that, the Z value is extracted with ST_Z.

Import a 2D Shapefile into PostGIS as 3D vector

$ ogr2ogr -progress --config PG_USE_COPY YES -f PostgreSQL PG:"host=*** port=5432 dbname=*** password=*** user=curso" /path/to/shapefile_2d.shp shapefile_2d -lco SCHEMA=public -lco GEOMETRY_NAME=geom -lco FID=gid -nln 3d_vector -zfield z_value

check the result with ogrinfo:

$ ogrinfo -so PG:"host=** user=** dbname=** password=***" 3d_vector
INFO: Open of `PG:host=localhost user=curso dbname=curso password=curso'
      using driver `PostgreSQL' successful.

Layer name: 3d_vector
Geometry: 3D Point
Feature Count: 5
Extent: (484628.515471, 4227667.038043) - (617462.831823, 4605830.183414)
Layer SRS WKT:
PROJCS["WGS 84 / UTM zone 29N",
    GEOGCS["WGS 84",
        DATUM["WGS_1984",
            SPHEROID["WGS 84",6378137,298.257223563,
                AUTHORITY["EPSG","7030"]],
            AUTHORITY["EPSG","6326"]],
        PRIMEM["Greenwich",0,
            AUTHORITY["EPSG","8901"]],
        UNIT["degree",0.0174532925199433,
            AUTHORITY["EPSG","9122"]],
        AUTHORITY["EPSG","4326"]],
    UNIT["metre",1,
        AUTHORITY["EPSG","9001"]],
    PROJECTION["Transverse_Mercator"],
    PARAMETER["latitude_of_origin",0],
    PARAMETER["central_meridian",-9],
    PARAMETER["scale_factor",0.9996],
    PARAMETER["false_easting",500000],
    PARAMETER["false_northing",0],
    AUTHORITY["EPSG","32629"],
    AXIS["Easting",EAST],
    AXIS["Northing",NORTH]]
FID Column = gid
Geometry Column = geom
z_value: Integer (10.0)

Then from the PostgreSQL console add a new empty column to the PostGIS 3D vector (because the table name starts with a digit, it must be double-quoted in SQL):

ALTER TABLE "3d_vector" ADD COLUMN z_geom integer;

and then populate it with the Z values from the geometries:

UPDATE "3d_vector" SET z_geom = ST_Z(geom);

For Spatialite the notes are the same as for PostGIS.
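For example, a sketch of the Spatialite equivalent (layer and column names here are example values; the OGR SQLite driver names the geometry column GEOMETRY by default):

$ ogr2ogr -f SQLite -dsco SPATIALITE=YES vector3d.sqlite shapefile_2d.shp -nln vector_3d -zfield z_value
$ spatialite vector3d.sqlite "ALTER TABLE vector_3d ADD COLUMN z_geom INTEGER;"
$ spatialite vector3d.sqlite "UPDATE vector_3d SET z_geom = ST_Z(GEOMETRY);"

Naming the layer vector_3d (instead of 3d_vector) also avoids having to quote an identifier that starts with a digit.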


Schedule Netgear ReadyNAS Ultra 2 backups with RSYNC over SSH

The problem:

Using its web-based control panel, the ReadyNAS Ultra 2 allows creating scheduled backup jobs against remote RSYNC servers, but not over SSH, which may very well be the only available option.

The solution:

Enable root SSH access to the ReadyNAS Ultra 2 unit and (using the command line) set up a CRON job.

Step 1:

Enable root SSH on the ReadyNAS unit by installing the following extension: http://www.readynas.com/?p=4203

[Note: the “root” user password is the same as the one of the “admin” user]

Connect via SSH to the NAS

$ ssh root@IP_OF_NAS

Step 2:

Generate public and private SSH keys (use no passphrase)

$ ssh-keygen

Step 3:

Copy the public key to the remote server:

$ ssh-copy-id -i ~/.ssh/id_rsa.pub remote_server_user@IP_OF_REMOTE_SERVER

Step 4:

Edit crontab and add a scheduled RSYNC task

$ crontab -e

After issuing the above command, it may very well show an error like:

$ netgear-nas:~# crontab -e
 no crontab for root - using an empty one
 /bin/sh: /usr/bin/editor: No such file or directory
 crontab: "/usr/bin/editor" exited with status 127

This happens because the default command-line text editor is not correctly configured. Switching to “vi” is needed:

$ EDITOR=/bin/vi; export EDITOR

now trying again

$ crontab -e

should work and crontab can be edited with vi

[Note: “vi” is hard for less experienced users, but its basic commands are easy enough for simple edits to text files]
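To avoid having to re-export EDITOR at every login, the assignment can be appended to root’s shell profile (the file name is an assumption; it depends on the shell configured on the NAS):

$ echo 'EDITOR=/bin/vi; export EDITOR' >> ~/.profile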

Step 5:

Add an RSYNC task to crontab, save, and exit (the -a flag is needed for rsync to recurse into the folder and preserve file attributes):

30 17 * * * rsync -a /nas/folder remote_server_user@IP_OF_REMOTE_SERVER:/path/where/place/backup/

Final note:

It is better to test the above RSYNC task directly from the command line first and check whether it still asks for the password; it should not. If it does, the likely cause is wrong permissions on the

~/.ssh/authorized_keys

file on the remote host: ~/.ssh should be 700 and ~/.ssh/authorized_keys should be 600. If it still asks for the password, check the permissions on ~ (they should be 700 too).
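On the remote server, the permissions can be fixed with:

$ chmod 700 ~ ~/.ssh
$ chmod 600 ~/.ssh/authorized_keys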

See:

http://unix.stackexchange.com/questions/36540/why-am-i-still-getting-a-password-prompt-with-ssh-with-public-key-authentication

and/or

http://www.daveperrett.com/articles/2010/09/14/ssh-authentication-refused/


Install WordPress under Ubuntu and enable automatic updates

Install MySQL and PHP5 MySQL extension:

sudo apt-get install mysql-server php5-mysql

Initialize the MySQL data directory and run a script to make MySQL more secure:

sudo mysql_install_db
sudo mysql_secure_installation

Enter MySQL console as root, create the DB, create the user and grant it the necessary permissions:

mysql -u root -p
CREATE DATABASE wordpress;
CREATE USER wordpressuser@localhost IDENTIFIED BY 'password';
GRANT ALL PRIVILEGES ON wordpress.* TO wordpressuser@localhost;
FLUSH PRIVILEGES;
exit

Install WordPress as usual; after that, add the following two lines to wp-config.php:

define('FS_METHOD','direct');
define( 'WP_AUTO_UPDATE_CORE', true );

and set the permissions of the WordPress folders/files as follows:

sudo find /path/to/wordpress/ -type f -exec chmod 664 {} \;
sudo find /path/to/wordpress/ -type d -exec chmod 775 {} \;
sudo chgrp -R www-data /path/to/wordpress/

To back up the MySQL database automatically, first create a .my.cnf file and then chmod it to 600. The file must include:

[client]
user=mysqluser
password=mysqlpassword

Then add a CRON job such as:

mysqldump --defaults-file=/path/to/.my.cnf -u mysqluser databasename --add-drop-table -h host | bzip2 -c > /path/to/backupfile-`date +\%Y\%m\%d`.sql.bz2
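For example, a complete crontab line running the dump every night at 02:30 (paths, user, database, and host are placeholders to adapt):

30 2 * * * mysqldump --defaults-file=/path/to/.my.cnf -u mysqluser databasename --add-drop-table -h localhost | bzip2 -c > /path/to/backupfile-`date +\%Y\%m\%d`.sql.bz2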