Sunday, December 21, 2014

How to create a swap file

Linux can be configured to use swap space - disk storage that supplements physical memory - when physical memory is running low. Swap spaces can be allocated as disk partitions ('swap partitions') or as files ('swap files'). While swap partitions are generally preferred over swap files, if your system is a virtual private server (VPS) without a pre-configured swap partition, creating a swap file may be your only option. The following procedure describes how to create a swap file.

List swap spaces

Before you create a swap file, you should first check whether the system has any swap space pre-allocated. The easiest way is to run the free command.

$ free -h
             total       used       free     shared    buffers     cached
Mem:          497M       490M       6.2M         0B        14M       101M
-/+ buffers/cache:        375M       121M
Swap:           0B         0B         0B

The line labeled Swap above tells you that there is no swap space configured.

Alternatively, run the swapon command with the -s parameter:

$ sudo swapon -s
Filename                                Type            Size    Used    Priority

I prefer free because root privilege is not required to run the command.

Create swap file

Follow the steps below to create and activate a swap file.

  1. Create a new file pre-allocated with the desired file size.
    $ sudo fallocate -l 500M /var/swap.img

    The above command pre-allocates 500 megabytes for the file /var/swap.img.

  2. Secure the new file.
    $ sudo chmod 600 /var/swap.img
  3. Make a swap file.

    The following mkswap command sets up /var/swap.img as a swap file.

    $ sudo mkswap /var/swap.img
    Setting up swapspace version 1, size = 511996 KiB
    no label, UUID=a0a90414-adab-4c50-8b27-0d27f0c34448
  4. Activate the swap file.
    $ sudo swapon /var/swap.img

    After executing the above swapon command, verify that the swap file is indeed enabled.

    $ free -h
                 total       used       free     shared    buffers     cached
    Mem:          497M       464M        32M         0B        14M       104M
    -/+ buffers/cache:        346M       151M
    Swap:         499M        34M       465M
    $ sudo swapon -s
    Filename                                Type            Size    Used    Priority
    /var/swap.img                           file            511996  35184   -1

    According to the above output, the swap file has been enabled. However, unless you complete the next step, the swap file will be disabled when you reboot the machine.

  5. Update file system table.

    Add the swap file to the file system table using the following command:

    $ sudo sh -c 'echo "/var/swap.img none swap sw 0 0" >> /etc/fstab'
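
    To confirm that the fstab entry works without rebooting, you can deactivate all swap spaces and then re-activate everything listed in /etc/fstab (assuming no other swap spaces are in active use):

    $ sudo swapoff -a
    $ sudo swapon -a
    $ free -h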

Wednesday, December 3, 2014

How to change system timezone

When you initially install Linux, you specify the machine's timezone. After the install, you can manually change the timezone. The following procedure applies to Debian and Ubuntu systems.

Before you change the timezone, let's find out what timezone your system is currently in.

$ date
Tue Dec 2 13:53:11 PST 2014

The above date command tells you that the system is on PST, aka Pacific Standard Time.

You can change the timezone interactively or through batch processing.

Interactive setup

The following command guides you through 2 screens to configure the timezone.

$ sudo dpkg-reconfigure tzdata

The advantage of specifying the timezone interactively is that you don't have to know the exact name of the timezone. The program will guide you to select your target timezone. But, if you want to automate the process through a shell script, please follow the batch method as explained below.

Batch setup

  1. Identify the name of the target timezone.

    Timezone data files are stored in the /usr/share/zoneinfo directory tree. Each continent has a corresponding subdirectory, e.g., /usr/share/zoneinfo/America. Each continent subdirectory contains timezone files named by cities in the continent, e.g., /usr/share/zoneinfo/America/Vancouver.

    $ ls /usr/share/zoneinfo/America

    Note the city where your system is located (or the nearest city in the same timezone). The timezone identifier is the concatenated continent and city names, e.g., America/Vancouver.

  2. Specify the timezone in /etc/timezone.
    $ sudo sh -c 'echo America/Vancouver > /etc/timezone'
  3. Run configure program.
    $ sudo dpkg-reconfigure -f noninteractive tzdata
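
Putting the batch steps together, a minimal script might look like this (the timezone name is an assumption; substitute your own):

#!/bin/sh
# Set the system timezone non-interactively (Debian/Ubuntu).
# Run as root, e.g., via sudo.
TZNAME="America/Vancouver"
echo "$TZNAME" > /etc/timezone
dpkg-reconfigure -f noninteractive tzdata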

Monday, November 24, 2014

Free on-line Introduction to Linux course

In August 2014, more than 300,000 people registered for the first offering of the Introduction to Linux course. This popular Massive Open Online Course (MOOC) is taught by the Linux Foundation, and hosted on edX. The same course starts again on January 5, 2015.

The course is designed for people who have limited or no previous exposure to Linux. Despite that, I have enrolled in it, thinking that I will pick up some new knowledge anyway. Because it is self-paced (and free), if it proves to be too easy, I will just skip the course content.

If you are interested, please go enroll at edX today.

Thursday, November 20, 2014

How to split an image for visual effects

Suppose that you've just taken a panorama photograph with your fancy digital camera.

You can display the picture as is on your blog. Or you can be a little bit more creative. How about splitting it up into 3 rectangular pieces?

Or even into 2 rows like the following.


To crop a photo into rectangular pieces, use the convert program from the ImageMagick software suite. If your system runs on Debian or Ubuntu, install ImageMagick like this:

$ sudo apt-get install imagemagick

The original panorama image (P3190007.JPG) is 4256 x 1144 pixels (width x height). The following command crops the image into tiles of 1419 x 1144 pixels. The output files are named tile_0.JPG, tile_1.JPG, etc., numbered sequentially starting from 0.

$ convert -crop 1419x1144 P3190007.JPG tile_%d.JPG
$ ls -al tile*
-rw-r--r-- 1 peter peter 337615 Nov 19 21:45 tile_0.JPG
-rw-r--r-- 1 peter peter 300873 Nov 19 21:45 tile_1.JPG
-rw-r--r-- 1 peter peter 315006 Nov 19 21:45 tile_2.JPG

The convert program can automatically calculate the width and height dimensions of the output tiles. You simply tell it the number of columns and rows. For example, '3x1@' means 3 columns and 1 row.

$ convert -crop 3x1@ P3190007.JPG tile_%d.JPG

If you want to stitch the component images back together, execute the following command:

$ convert tile_*.JPG +append output.JPG

The +append parameter tells convert to join the images side by side. If, for whatever reason, you want to stack them up vertically, specify -append instead.
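
For the two-row variant shown earlier, a similar pair of commands should work (a sketch using the same file names):

$ convert -crop 1x2@ P3190007.JPG row_%d.JPG
$ convert row_*.JPG -append output.JPG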

Saturday, October 25, 2014

Tools for checking broken web links - part 2

Part 1 of this 2-part series on Linux link checking tools reviewed the tool linkchecker. This post concludes the series by presenting another tool, klinkstatus.

Unlike linkchecker which has a command-line interface, klinkstatus is only available as a GUI tool. Installing klinkstatus on Debian/Ubuntu systems is as easy as:

$ sudo apt-get install klinkstatus

After installation, I could not locate klinkstatus in the GNOME menu system. No problem. To run the program, simply execute the klinkstatus command in a terminal window.

For an initial exploratory test run, simply specify the starting URL for link checking in the top part of the screen (e.g., http://linuxcommando.blogspot.ca), and click the Start Search button.

You can pause link checking by clicking the Pause Search button, and review the latest results. To resume, click Pause Search again; to stop, Stop Search.

Now that you have reviewed the initial results, you can customize subsequent checks in order to constrain the amount of output that you need to manually analyze and address afterward.

The program's user interface is very well designed. You can specify the common parameters right on the main screen. For instance, after exploratory testing, I want to prevent link checking for certain domains. To do that, enter the domain names in the Do not check regular expression field. Use the OR operator (the vertical bar '|') to separate multiple domains, e.g., google.com|blogger.com|digg.com.

To customize a parameter that is not exposed on the main screen, click Settings, and then Configure KLinkStatus. There, you will find more parameters such as the number of simultaneous connections (threads) and the timeout threshold.

The link checking output is by default arranged in a tree view with the broken links highlighted in red. The tree structure allows you to quickly determine the location of the broken link with respect to your website.

You may choose to recheck a broken link to determine if the problem is a temporary one. Right click the link in the result pane and select Recheck.

Note that right clicking a link brings up other options such as Open URL and Open Referrer URL. With these options, you can quickly view the context of the broken link. This feature would be very useful if it worked. Unfortunately, clicking either option fails with the error message: Unable to run the command specified. The file or folder http://.... does not exist. This turns out to be an unresolved klinkstatus bug. A workaround is to first click Copy URL (or Copy Referrer URL) from the right-click menu, and then paste the URL into a web browser to open it manually.

The link checking output can be exported to an HTML file. Click File, then Export to HTML, and select whether to include All or just the Broken links.

Below is a final note to my fellow non-US bloggers (I'm blogging from Canada).

If I enter linuxcommando.blogspot.com as the starting URL, the search is immediately redirected to linuxcommando.blogspot.ca, and stops there. To klinkstatus, blogspot.com and blogspot.ca are 2 different domains, and when the search reaches an "external" domain (blogspot.ca), it is programmed to not follow links from there. To correct the problem, I specify linuxcommando.blogspot.ca instead as the starting URL.

Monday, October 20, 2014

Tools for checking broken web links - part 1

With a growing web site, it becomes almost impossible to manually uncover all broken links. For WordPress blogs, you can install link checking plugins to automate the process. But these plugins are resource intensive, and some web hosting companies (e.g., WPEngine) ban them outright. Alternatively, you may use web-based link checkers, such as Google Webmaster Tools and W3C. Generally, these tools lack advanced features, such as the use of regular expressions to filter URLs submitted for link checking.

This post is part 1 of a 2-part series to examine Linux desktop tools for discovering broken links. The first tool is linkchecker, followed by klinkstatus which is covered in the next post.

I ran each tool on this very blog "Linux Commando" which, to date, has 149 posts and 693 comments.

linkchecker offers both a command-line and a GUI interface. To install the command-line version on Debian/Ubuntu systems:

$ sudo apt-get install linkchecker

Link checking often results in too much output for the user to sift through. A best practice is to run an initial exploratory test to identify potential issues, and to gather information for constraining future tests. I ran the following command as an exploratory test against this blog. The output messages are streamed to both the screen and an output file named errors.csv. The output lines are in the semicolon-separated CSV format.

$ linkchecker -ocsv http://linuxcommando.blogspot.com/ | tee errors.csv

Notes:

  • By default, 10 threads are generated to process the URLs in parallel. The exploratory test resulted in many timeouts during connection attempts. To avoid timeouts, I limited subsequent runs to only 5 threads (-t5), and increased the timeout threshold from 60 to 90 seconds (--timeout=90).
  • The exploratory test output was cluttered with warning messages such as access denied by robots.txt. For actual runs, I added the parameter --no-warnings to write only error messages.
  • This blog contains monthly archive pages, e.g., 2014_06_01_archive.html, which link to all actual content pages posted during the month. To avoid duplicating effort to check the content pages, I specified the parameter --no-follow-url=archive\.html to skip archive pages. If needed, you can specify more than one such parameter.
  • Embedded in the website are some external links which do not require link checking. For example, links to google.com. I can use the --ignore-url=google\.com parameter to specify a regular expression to filter them out. Note that, if needed, you can specify multiple occurrences of the parameter.

The revised command is as follows:

$ linkchecker -t5 --timeout=90 --no-warnings --no-follow-url=archive\.html --ignore-url=google\.com --ignore-url=blogger\.com -ocsv http://linuxcommando.blogspot.com/ | tee errors.csv

To visually inspect the output CSV file, open it using a spreadsheet program. Each link error is listed on a separate line, with the first 2 columns being the offending URLs and their parent URLs respectively.

Note that a bad URL can be reported multiple times in the file, often non-consecutively. One such URL is http://doncbex.myopenid.com/ (highlighted in red). To make inspection and analysis of the broken URLs easier, sort the lines by the first column, i.e., the URL column.
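
Because the file is semicolon-separated, the sort can also be done on the command line (a sketch):

$ sort -t ';' -k 1,1 errors.csv > sorted.csv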

A closer examination revealed that many broken URLs were not URLs I inserted in my website (including the red ones). So, where do they come from? To solve the mystery, I looked up their parent URLs. Lo and behold, those broken links were actually URL identifiers of the comment authors. Over time, some of those URLs had become obsolete. Because they were genuine comments, and provided value, I decided to keep them.

linkchecker did find 5 true broken links that needed fixing.

If you prefer not to use the command line interface, linkchecker has a front-end which you can install like this:

$ sudo apt-get install linkchecker-gui

Not all parameters are available on the front-end for you to modify directly. If a parameter is not on the GUI, such as skipping warning messages, you need to edit the linkchecker configuration file. This is inconvenient, and a potential source of human error. Another limitation is that you cannot pause the operation once link checking is in progress.

If you want to use a GUI tool, I'd recommend klinkstatus which is covered in part 2 of this series.

Tuesday, September 30, 2014

How to redirect sudo output to a file requiring root permission

sudo is the recommended way to execute a command which requires root permission. In effect, the target command takes on the permission of root without having to provide the root password.

Consider the following scenario. In order to save the changes made to the iptables firewall rules, I need to run the following command which outputs the changes to a file with root permission.

$ sudo iptables-save > /etc/iptables/rules.v4
bash: /etc/iptables/rules.v4: Permission denied

Note the Permission denied error. The problem is that while the iptables-save command runs under sudo, the output redirection to the /etc/iptables/rules.v4 file is handled by the shell, which runs as the non-root user.

To overcome the problem, you can write a simple shell script and run the script using sudo like this:

$ cat > myscript.sh
#!/bin/sh
iptables-save > /etc/iptables/rules.v4
$ chmod +x myscript.sh
$ sudo ./myscript.sh
If you don't want to write a script, the following are some alternatives.
  • $ sudo sh -c "iptables-save > /etc/iptables/rules.v4"
  • $ echo 'iptables-save > /etc/iptables/rules.v4' | sudo bash
  • $ sudo iptables-save | sudo tee /etc/iptables/rules.v4 > /dev/null

Tuesday, September 23, 2014

Upgrade from Fedora 19 to 20 using fedup

The recommended upgrade method for Fedora is to use the fedup tool. Below is my experience in following the fedup procedure to upgrade from Fedora 19 to 20. The upgrade was done over the Internet ("network upgrade") instead of from a local DVD media.

  1. Back up all important data in the system.
  2. Verify that the hard disk has sufficient disk space.

    Fedup first downloads the version 20 packages while the system is still running version 19. Therefore, the hard drive must have enough disk space to hold packages of both versions during the upgrade process. For my system, storing the version 20 packages requires about 2 GB.

  3. Perform a full system update under Fedora 19, and reboot to ensure that the system has the latest kernel changes.
    $ sudo yum update
    $ sudo reboot
  4. Install fedup client.

    The fedup client downloads over the Internet the boot image required to run the upgrade as well as the packages to be upgraded. It sets up the system to run the upgrade at the next boot.

    $ sudo yum install fedup
  5. Run fedup client.
    $ sudo fedup --network 20

    The above command downloads over the Internet (from the Fedora mirror system) all packages needed to upgrade to Fedora 20. It took almost an hour for my system to download everything. You should always verify that the run was successful by checking the fedup log file, /var/log/fedup.log.

    My first upgrade attempt appeared stalled towards the end of the download. So, I terminated the program with a Control-C. The fedup log file revealed a problem with downloading gnupg.

    [ 4130.971] (II) fedup.cli:start_meter() download gnupg-1.4.18-1.fc20.i686.rpm
    [ 4131.107] (II) fedup.yum:log_grab_failure() http://www.muug.mb.ca/pub/fedora/linux/updates/20/i386/gnupg-1.4.18-1.fc20.i686.rpm: [Errno 14] HTTP Error 416 - Requested Range Not Satisfiable

    I reran the command, and it went further than before but still failed with the error message Downloading failed: Didn't install any keys.

    The log file revealed that the offending key was RPM-GPG-KEY-rpmfusion-nonfree-fedora-20.

    [ 122.225] (--) fedup.yum:_retrievePublicKey() Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-rpmfusion-nonfree-fedora-20
    [ 122.266] (II) fedup.yum:_GPGKeyCheck() repo 'rpmfusion-nonfree' wants to import key /etc/pki/rpm-gpg/RPM-GPG-KEY-rpmfusion-nonfree-fedora-20
    [ 122.267] (II) fedup.yum:check_keyfile() checking keyfile /etc/pki/rpm-gpg/RPM-GPG-KEY-rpmfusion-nonfree-fedora-20
    [ 122.268] (DD) fedup.yum:check_keyfile() keyfile owned by package rpmfusion-nonfree-release-0:19-1
    [ 122.271] (DD) fedup.yum:check_keyfile() package was signed with key cd30c86b
    [ 122.272] (II) fedup.yum:check_keyfile() REJECTED: key cd30c86b is not trusted by rpm
    [ 122.273] (II) fedup.yum:_GPGKeyCheck() no automatic trust for key %s
    [ 122.273] (II) fedup:message() Downloading failed: Didn't install any keys
    [ 122.274] (DD) fedup:<module>() Traceback (for debugging purposes):

    To solve the key problem, I manually imported the key using the following command:

    $ sudo rpmkeys --import /etc/pki/rpm-gpg/RPM-GPG-KEY-rpmfusion-nonfree-fedora-20

    Then, I ran the fedup command for the third time.

    $ fedup --network 20
    setting up repos...
    No upgrade available for the following repos: fedora-chromium-stable
    getting boot images...
    .treeinfo.signed                                         | 2.1 kB  00:00
    setting up update...
    finding updates 100% [=========================================================]
    verify local files 100% [======================================================]
    testing upgrade transaction
    rpm transaction 100% [=========================================================]
    rpm install 100% [=============================================================]
    setting up system for upgrade
    Finished. Reboot to start upgrade.
    Packages without updates:
    ....
    NOTE: Some repos could not be contacted: fedora-chromium-stable
    If you start the upgrade now, packages from these repos will not be installed.

    The command completed with an informational message No upgrade available for the following repos: fedora-chromium-stable. The cause of the message is that Fedora 20 does not include Chromium in its official repository. I decided to ignore the message, and continued with the upgrade. As a result, Chromium will not be automatically upgraded. However, after the upgrade is finished, I can manually upgrade Chromium from an unofficial repository or install Google Chrome instead.

  6. Reboot the system.
    $ sudo reboot

    Note that a new entry, System Upgrade, is added to the GRUB menu. This is the default entry, and will be automatically selected. The actual upgrade took about 1 hour for my system.

    After the upgrade is complete, the system automatically reboots into Fedora 20.

  7. Login.

    Now that Fedora 20 is running, login and run the following command to display the version information.

    $ lsb_release -a
    LSB Version:    :core-4.1-ia32:core-4.1-noarch
    Distributor ID: Fedora
    Description:    Fedora release 20 (Heisenbug)
    Release:        20
    Codename:       Heisenbug
  8. Install Chrome.

    Instead of upgrading Chromium from an unofficial Fedora repository, I decided to switch to Chrome. Chrome is the free Google browser that is derived from the upstream Chromium project.

    To install Chrome:

    • Browse to the Google Chrome download site.
    • Select to download the appropriate 32 or 64-bit Fedora rpm.
    • Install the rpm

      I first used the rpm command to install the package. It failed because of a dependency problem.

      $ sudo rpm -i google-chrome-stable_current_i386.rpm
      warning: google-chrome-stable_current_i386.rpm: Header V4 DSA/SHA1 Signature, key ID 7fac5991: NOKEY
      error: Failed dependencies:
              lsb >= 4.0 is needed by google-chrome-stable-37.0.2062.120-1.i386

      To resolve the dependency automatically, I used the yum command as follows:

      $ sudo yum localinstall google-chrome-stable_current_i386.rpm

What was your experience in upgrading Fedora? Let us know by entering a comment.

Tuesday, September 16, 2014

How to optimize PNG images

My previous post introduces some tools to optimize JPEG images. The focus of this post is on optimizing PNG images. Two complementary tools will be presented: optipng and pngquant. The former is lossless; the latter, lossy.

optipng

optipng optimizes a PNG file by compressing it losslessly.

The command to install optipng on Debian/Ubuntu is:

$ sudo apt-get install optipng

For Fedora/Centos/RedHat, execute:

$ sudo yum install optipng

To optimize a PNG file named input.png:

$ optipng -o7 -strip all -out out.png -clobber input.png

Notes:

  • Output PNG file.

    By default, optipng compresses the PNG file in-place, hence overwriting the original file. To write the output to a different file, use the -out option to specify a new output file. If the specified output file already exists, the -clobber option allows it to be overwritten. The -clobber option is useful if you run the command more than once.

    Alternatively, replace -out out.png with the -backup option. As a result, optipng first backs up the original input file before compressing the input file in-place.

  • Meta data.

    The -strip all option removes all meta data from the image.

  • Optimization level.

    The -o option specifies the optimization level, which ranges from 0 to 7. Level 7 offers the highest compression, but also takes the longest time to complete. It has been reported that increasing the optimization level yields diminishing returns, and the results from my own 1-image test confirm that: the default optimization level of 2 is already pretty good, and higher levels do not offer a big increase in compression.

    Optimization level    Compression time (seconds)    File size (bytes)    % Reduction
    Original              N/A                           285,420              N/A
    0                     0.03                          285,012              0.14
    1                     3.07                          242,548              15.02
    2                     5.77                          242,548              15.02
    3                     10.33                         242,175              15.15
    4                     17.54                         241,645              15.34
    5                     34.61                         241,258              15.47
    6                     35.86                         241,645              15.34
    7                     71.37                         241,258              15.47
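
If you want to run a similar comparison on your own image, a simple timing loop does the job (a sketch; input.png is a placeholder for your own file):

$ for lvl in $(seq 0 7); do time optipng -o$lvl -strip all -out level_$lvl.png -clobber input.png; done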

pngquant

pngquant uses lossy compression techniques to reduce the size of a PNG image. It converts a 32-bit PNG image to an 8-bit paletted image. More specifically, instead of storing each pixel as a 4-channel, 32-bit RGBA value, each pixel is stored as an 8-bit reference that maps to a color in a palette. This 8-bit color palette is embedded in the image, and is capable of defining 256 unique colors. The trick then becomes how to reduce the total number of colors in an image without sacrificing too much perceivable quality.

To install pngquant on Debian/Ubuntu:

$ sudo apt-get install pngquant

Note that the pngquant version shipped on Debian Wheezy is obsolete (1.0), and not recommended by the official pngquant web site. The examples below were run on version 2.0.0.

To install pngquant on Fedora/Centos/Redhat:

$ sudo yum install pngquant

To optimize a PNG image:

$ pngquant -o output.png --force --quality=70-80 input.png

Notes:

  • Specify the output image file name using the -o option. Without it, the default output name is the input name with a suffix appended before the extension (for example, input.png becomes input-fs8.png).
  • Without the --force option, pngquant will not overwrite the output file if it already exists.
  • Since the introduction of the --quality=min-max option in version 1.8, the number of colors is automatically derived from the specified min and max quality values. The min and max values range from 0 to 100, with 100 being the highest quality.

    pngquant uses the smallest number of colors required to meet or exceed the max quality level (80 in the above example). If even the min quality value (70) cannot be achieved, the output image is not saved.
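
Because pngquant exits with a non-zero status when it cannot achieve the minimum quality, a script can fall back to keeping the original image (a sketch; the file names are arbitrary):

$ pngquant -o output.png --force --quality=70-80 input.png || cp input.png output.png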

Below summarizes the results of optimizing one randomly chosen PNG image. It is not intended to be scientific or conclusive. Rather, I hope to give you an idea of the scale of reduction that is possible.

Quality (min-max)    Orig         70-90%     70-80%
File size (bytes)    1,281,420    445,464    376,221
% Reduction          -            65.2       70.6

The 2 programs - optipng and pngquant - are not mutually exclusive. You will get most of the compression from running pngquant. But if you want to squeeze out the last 1% or so of compression, you can run pngquant first, then optipng.

$ pngquant -o lossy.png --force --quality=70-80 input.png
$ optipng -o7 -strip all -out output.png lossy.png

Friday, September 12, 2014

How to optimize JPEG images

Poor load time degrades the user's experience of a web page. For a web page containing large images, optimizing images can significantly improve the load time performance which leads to better user experience. Moreover, if a web site is hosted on a cloud service which charges for cloud storage, compressing images can be financially worthwhile. This post explains the optimization of JPEG images using 2 command-line programs: jpegtran and jpegoptim. My next post introduces tools to optimize PNG images.

jpegtran

jpegtran optimizes a JPEG file losslessly. In other words, it reduces the file size without degrading the image quality. By specifying options, you can ask jpegtran to perform 3 types of lossless optimization:

  • -copy none

    An image file may contain metadata that are useless to you. For example, the following figure shows the embedded properties from a picture taken by a digital camera. Properties such as the Camera Brand and Camera Model can be safely stripped from the picture without affecting image quality.

  • -progressive

    There are two types of JPEG files: baseline and progressive. Most JPEG files downloaded from a digital camera or created using a graphics program are baseline. Web browsers render baseline JPEG images from top to bottom as the bytes are transmitted over the wire. In contrast, a progressive JPEG image is transmitted in multiple passes of progressively higher details. This enables the user to see an image preview before the entire image is displayed in its final resolution.

    For large JPEG images, converting from baseline to progressive encoding often results in a smaller file size, and a faster, user-perceived load time. (A way to check a file's current encoding is shown after this list.)

  • -optimize

    This option optimizes the Huffman tables embedded in a JPEG image.
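
As an aside, you can check whether an existing JPEG file is baseline or progressive with ImageMagick's identify command (assuming ImageMagick is installed):

$ identify -verbose photo.jpg | grep Interlace

Interlace: None indicates a baseline file; Interlace: JPEG indicates a progressive one.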

To install jpegtran on Debian/Ubuntu:

$ sudo apt-get install libjpeg-progs

To install jpegtran on Fedora/Centos/RedHat:

$ sudo yum install libjpeg-turbo-utils

To optimize a JPG file:

$ jpegtran -copy none -progressive -optimize SAM_0297.JPG > opt_0297.JPG

Below are the results after running the above command.

File size before (bytes)    File size after (bytes)    Reduction (%)
3,119,056                   2,860,568                  8.3

jpegoptim

jpegoptim supports both lossless and lossy image optimization.

To install the program on Debian/Ubuntu:

$ sudo apt-get install jpegoptim

To install on Fedora/RedHat/Centos:

$ sudo yum install jpegoptim

To specify the same 3 types of lossless optimization as explained above, execute this command:

$ jpegoptim --strip-all --all-progressive --dest=opt SAM_0297.JPG
SAM_0297.JPG 4000x3000 24bit N Exif  [OK] 3119056 --> 2860568 bytes (8.29%), optimized.

Notes:

  • The --all-progressive option was introduced in jpegoptim version 1.3.0. The version on Debian Wheezy is only 1.2.3, so the option is not available there.
  • By default, jpegoptim compresses in place, overwriting the input JPEG image. If you don't want the program to write over the input file, specify an alternative directory using the --dest option.

jpegoptim can also compress an image file using lossy optimization techniques. Specify an image quality from 0 to 100, with 100 being the highest quality (and lowest compression). To compress with 90% image quality, execute:

$ jpegoptim --max=90 --dest=opt SAM_0297.JPG
SAM_0297.JPG 4000x3000 24bit Exif  [OK] 3119056 --> 2337388 bytes (25.06%), optimized.

The table below summarizes the % reduction in file size as you decrease the image quality. There is a trade-off between file size and image quality. While reducing file size is a worthwhile goal, you don't want to end up with an image that is not "pretty" enough. You are the final judge of the lowest quality that is acceptable to you. To pick the image quality for a specific picture, experiment by incrementally decreasing the quality (say, by 10 each time), visually inspecting the output image, and stopping when the quality is no longer acceptable. (A loop that automates this appears after the table.)

Quality              100%         90%          80%
File size (bytes)    3,119,056    2,337,388    1,356,131
% Reduction          -            25.0         56.5
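
To automate the experiment described above, a loop like the following writes one output file per quality level into separate directories (a sketch; the directory names are arbitrary):

$ for q in 90 80 70 60; do mkdir -p q$q; jpegoptim --max=$q --dest=q$q SAM_0297.JPG; done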

Tuesday, September 2, 2014

Create video screencasts using recordmydesktop

Why make a video screen capture, aka screencast? Some possible reasons are:

  • Create a software demo.
  • Capture a video game session for playback later.
  • Keep a record of what transpires on the computer screen.

I create screencasts mainly for recordkeeping. As a result, I don't expect to edit the output video, for example, to obfuscate sensitive data, or to insert call-outs to draw attention to specific screen areas.

The tool I use is the command-line program recordmydesktop. It is a basic, no-frills screencasting program which offers video and audio capture but no editing features.

To install recordmydesktop on Debian/Ubuntu:

$ sudo apt-get install recordmydesktop

Record entire screen

$ recordmydesktop --no-sound --delay 5 -o myfile.ogv

Notes on command options:

  1. --no-sound is specified because I don't want to record a soundtrack for the video.
  2. With --delay 5, actual recording is delayed for 5 seconds after the command is run. This short delay gives you time to set up the screen properly before recording starts. For example, you may want to minimize the window in which you execute recordmydesktop.
  3. By default, the output file is named out.ogv. You can change the file name using the -o option, but the file extension must be ogv; recordmydesktop only outputs ogv video files. If you want to upload the video to YouTube, use a video converter to convert from ogv to one of the YouTube-supported formats, e.g., avi (see the example after this list).
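
For the format conversion mentioned in note 3, ffmpeg can do the job (a hedged example; ffmpeg must be installed, and the container/codec defaults are left to ffmpeg):

$ ffmpeg -i myfile.ogv myfile.avi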

To stop the recording, press the Control-Alt-s keys. This action automatically starts the encoding process. Depending on the length of the video, encoding can take a while to complete. After you stop the recording, you cannot resume recording the screen.

To temporarily pause the recording, press Control-Alt-p. You can resume recording by pressing the same keystrokes again.

Record a region

$ recordmydesktop --fps 15 --width 1038 --height 777 -x 88 -y 96 --no-sound --delay 5 -o myfile.ogv

--fps specifies the frame rate, the number of frames to capture per second. To avoid choppy videos, use a sufficiently high frame rate. 15 fps is in general good enough.

To capture a rectangular region on the screen, you need to explicitly specify its location and dimensions. The -x and -y options define the offsets in number of pixels from the upper left corner of the screen. The --width and --height options specify the dimensions of the region in number of pixels.

To help you determine the offsets and the dimensions, use the xwininfo command.

  1. Open the target window.

    If the region you want to capture is an existing window (say a browser window), simply open the window.

    Otherwise, open a new window, say a bash terminal. Re-size the terminal window to have the same size as the capture region. Relocate the window to the capture region on the screen.

  2. Run xwininfo without any argument.
  3. Click inside the target window.
  4. Look for the following fields in the command output:
    $ xwininfo
    ...
    Absolute upper-left X:  88
    Absolute upper-left Y:  96
    ...
    Width: 1038
    Height: 777
    ...

GUI screencast programs

If you prefer a GUI-based app, you may use gtk-recordmydesktop (a front-end for recordmydesktop), or kazam.

gtk-recordmydesktop can be installed like this:

$ sudo apt-get install gtk-recordmydesktop

With the GUI front-end, you no longer need to manually calculate the dimensions, or the x and y offsets of the capture region. Instead, simply click and drag to define a rectangular region.

In summary, recordmydesktop is a nifty screencasting tool which allows you to video-capture your screen with or without sound. However, it does not have the editing features to add text call-outs or obfuscate sensitive screen contents - features which are critical for creating video tutorials. If you are aware of a video capture tool that does both recording and editing, please make a comment.

Tuesday, August 26, 2014

Building a firewall for a Debian web server

This post addresses how to configure the Linux firewall to protect a Debian-based web application server. While there are GUI tools for the job, we will focus on the command-line tool iptables.

The scenario is that you have just installed Debian (or Ubuntu) on a server connected to the Internet. This server will host your WordPress blog. I assume that you already have Apache and WordPress installed. Please refer to my earlier post for instructions on how to install WordPress on Debian.

Basic Requirements

Before we build the firewall, let's write down the basic requirements - the types of traffic the machine will accept and those it will drop.

  • Accept all outbound traffic (from server to the Internet).
  • Accept all traffic from the loopback (lo) interface, which is necessary for many applications.
  • Accept inbound ssh logins.
  • Accept inbound Web requests.
  • Accept inbound ping requests.
  • Log firewall-specific warnings.

Build firewall

Please follow the order of the steps below. The procedural order is designed to minimize the chance of locking yourself out by mis-configuring the firewall.

  1. Log in to the server either on the physical console or remotely via ssh.

    The physical console is better because you don't need to worry about being locked out. However, console access is not always possible because the machine may be sitting in a remote data center.

  2. Examine the current firewall configuration.
    $ sudo iptables -L
    Chain INPUT (policy ACCEPT)
    target     prot opt source               destination

    Chain FORWARD (policy ACCEPT)
    target     prot opt source               destination

    Chain OUTPUT (policy ACCEPT)
    target     prot opt source               destination

    Note the policies for INPUT, FORWARD, and OUTPUT. The typical default firewall is configured to accept all traffic, both inbound and outbound.

  3. Flush the firewall.

    Flush only if:

    • your firewall is not 'clean' - it has existing rules, and
    • the INPUT policy is ACCEPT.

    If the INPUT policy is not ACCEPT, you can make it so like this:

    $ sudo iptables -P INPUT ACCEPT

    To flush the firewall:

    $ sudo iptables -F

    Now, we are ready to add the firewall rules, one by one. Note that they comprise the basic rules to satisfy our stated requirements. Not included are specific rules to thwart common Internet attacks.

  4. Add rule # 1.
    $ sudo iptables -I INPUT 1 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

    The above rule accepts any incoming traffic that is part of, or related to, an existing connection. If you are currently logged in via a remote ssh session, this rule prevents you from being locked out. So, it is important that you create it first.

  5. Add rule # 2.
    $ sudo iptables -I INPUT 2 -i lo -j ACCEPT

    This rule accepts all traffic from the loopback interface (localhost/127.0.0.1).

  6. Add rule # 3.
    $ sudo iptables -I INPUT 3 -m conntrack --ctstate NEW -p tcp --syn --dport 80 -j ACCEPT

    This rule accepts all new incoming web connections to port 80, where WordPress is served.

  7. Add rule # 4.
    $ sudo iptables -I INPUT 4 -m conntrack --ctstate NEW -p tcp --syn --dport 22 -j ACCEPT

    Rule # 4 accepts all new incoming ssh sessions to port 22.

  8. Add rule # 5.
    $ sudo iptables -I INPUT 5 -p icmp --icmp-type echo-request -m limit --limit 2/second -j ACCEPT

    This rule accepts incoming ping echo requests at the maximum rate of 2 requests per second.

  9. Add rule # 6.
    $ sudo iptables -I INPUT 6 -m limit --limit 2/min -j LOG --log-prefix "INPUT:DROP:" --log-level 6

    All incoming traffic that is not accepted by any prior rule gets logged, at a maximum rate of 2 entries per minute. The default log file is /var/log/messages. For easy identification, the log entries are prefixed with the string 'INPUT:DROP:'.

  10. Change default INPUT and FORWARD policies to DROP.

    With the policy change, all incoming traffic not explicitly accepted by any of the above rules is dropped.

    $ sudo iptables -P INPUT DROP
    $ sudo iptables -P FORWARD DROP

Your basic firewall is complete. You can view the newly created firewall rules via the following:

$ sudo iptables -v -L
Chain INPUT (policy DROP 147 packets, 51908 bytes)
 pkts bytes target  prot opt in   out  source    destination
 1304  487K ACCEPT  all  --  any  any  anywhere  anywhere     ctstate RELATED,ESTABLISHED
    0     0 ACCEPT  all  --  lo   any  anywhere  anywhere
    0     0 ACCEPT  tcp  --  any  any  anywhere  anywhere     ctstate NEW tcp dpt:http flags: FIN,SYN,RST,ACK/SYN
    0     0 ACCEPT  tcp  --  any  any  anywhere  anywhere     ctstate NEW tcp dpt:ssh flags: FIN,SYN,RST,ACK/SYN
    0     0 ACCEPT  icmp --  any  any  anywhere  anywhere     icmp echo-request limit: avg 2/sec burst 5
    9  2954 LOG     all  --  any  any  anywhere  anywhere     limit: avg 2/min burst 5 LOG level info prefix "INPUT:DROP:"

Chain FORWARD (policy DROP 0 packets, 0 bytes)
 pkts bytes target  prot opt in   out  source    destination

Chain OUTPUT (policy ACCEPT 198 packets, 31420 bytes)
 pkts bytes target  prot opt in   out  source    destination

Save firewall

The firewall that you just created does not persist across reboots. If you reboot the server, the firewall reverts to the default accept-all configuration, losing all the above modifications. To save the firewall changes permanently:

  1. Install iptables-persistent package.
    $ sudo apt-get install iptables-persistent
  2. Make an explicit save.

    After you finish modifying the firewall, you need to explicitly save the firewall configuration in the file /etc/iptables/rules.v4 using the command below.

    $ sudo sh -c "iptables-save > /etc/iptables/rules.v4"

Custom log file

The above firewall logs all dropped incoming traffic to the general system log file /var/log/messages. To avoid cluttering the file, I recommend sending the iptables-related log entries to a separate file, say /var/log/iptables.log. This is possible because iptables-related log entries are prefixed with a custom identifier - 'INPUT:DROP:'.

  1. Create a rsyslog rule to redirect firewall log entries.

    A new file /etc/rsyslog.d/10-iptables.conf is created to hold the rsyslog rule.

    $ sudo sh -c 'cat > /etc/rsyslog.d/10-iptables.conf'
    :msg, contains, "INPUT:DROP:" -/var/log/iptables.log
    & ~

    The first line in the file specifies that if a log entry contains the custom identifier, it is sent to /var/log/iptables.log.

    The second line skips forward to the next log entry, thereby preventing double logging into /var/log/messages.

  2. Restart rsyslog daemon.
    $ sudo service rsyslog restart
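
To verify the redirection, generate some traffic that the firewall drops (for example, attempt a connection to a blocked port from another machine) and watch the new log file:

$ sudo tail -f /var/log/iptables.log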

Tuesday, August 19, 2014

Using double dash to specify pesky command line parameters

Many Linux commands accept command-line options and positional parameters. The grep command searches for a pattern in a given file. Its 2 positional parameters are the pattern and the filename. It accepts options such as -i which specifies that the search is case insensitive.

An interesting scenario is when a positional parameter has a value starting with a dash ('-'). This makes the parameter indistinguishable from an option. For example, suppose you try to grep for the string -tea in a given file.

$ grep -i '-tea' test.txt
grep: invalid option -- 't'
Usage: grep [OPTION]... PATTERN [FILE]...
Try 'grep --help' for more information.

The -tea is interpreted as an option. Consequently, the command failed.

To solve the problem, insert the double dash (--) on the command line to mark the end of all options. Everything that comes after the -- marker is interpreted as a positional parameter.

$ grep -i -- '-tea' test.txt
Beverages -tea and coffee- are served.

Another example scenario for using the double dash is deleting a file with a pesky file name. Suppose you want to delete a file named '-a'.

$ rm -a
rm: invalid option -- 'a'
Try `rm --help' for more information.

Enter -- to mark where the positional parameters start.

$ rm -- -a
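
If you want to experiment, the same marker lets you create such a pesky file in the first place; prefixing a relative path is another common workaround for deleting it:

$ touch -- -a
$ rm ./-a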

Sunday, August 10, 2014

Tips on GNOME Classic workspaces

In the world of affordable, dual 27-inch monitors, the GNOME virtual screen feature (aka workspaces) may not hold the same prominent position as before. But to people like me who still operate machines equipped with only a single 19-inch monitor, using workspaces effectively is still a big productivity booster.

Below are some tips for using GNOME Classic workspaces.

Tip 1: Change # of default workspaces

By default, the GNOME desktop has 4 workspaces available. That default can be changed. I, for one, would like to decrease the number of workspaces to 2; I hardly ever need more than 2. With 4 workspace icons spread out at the bottom of the GNOME Classic screen, I often accidentally click the wrong one.

To change the default number of workspaces, first install the wmctrl tool. (This tool is also featured in my previous post on how to change window manager.)

$ sudo apt-get install wmctrl

To modify the number of workspaces to 2:

$ wmctrl -n 2

Note that the change takes effect immediately. Also, the change persists after you log out of GNOME.
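
To confirm the change, list the current workspaces; wmctrl prints one line per workspace:

$ wmctrl -d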

Tip 2: Keyboard shortcuts to move window to another workspace

Move focus to the window that you want to move. This can be achieved by simply clicking inside that window.

Press the Shift+Ctrl+Alt+RightArrow keys to move the window one workspace to the right. Similarly, press the Shift+Ctrl+Alt+LeftArrow keys to move one workspace to the left.

Tip 3: Keyboard shortcuts to switch workspaces

Before I learned about the keyboard shortcuts, I always moused over to the workspace switcher app at the bottom of the GNOME Classic screen. From there, I would either click a workspace icon, or scroll using the mouse's scroll wheel, to switch to the target workspace.

If you prefer keyboard shortcuts to the mouse,

  • press the Ctrl+Alt+RightArrow keys to switch to the workspace on the right of the current workspace, or
  • press the Ctrl+Alt+LeftArrow keys to switch to the workspace on the left.

Tip 4: Customizing keyboard shortcuts

You can view the entire list of GNOME keyboard shortcuts, and even customize them according to your personal preferences.

  1. Open System Settings.
    $ gnome-control-center

    You can also open System Settings from the user menu in the top right corner.

  2. Open Keyboard / Shortcuts / Navigation.
  3. Change keyboard shortcut.

    To edit a keyboard shortcut, click to select the corresponding row. Note that a new label is displayed: New accelerator. Press the new keyboard shortcut keys.

Monday, August 4, 2014

Chrome Remote Desktop connects from your Android device to Linux

Google recently announced the beta release of the Chrome Remote Desktop for Linux. It allows you to remotely connect to a Linux machine from within the Chrome browser. Judging from the early comments in the Google product help forum, setting up the Chrome Remote Desktop on a Linux machine is still rather quirky for certain configurations. This post details my experience of successfully installing and setting up Chrome Remote Desktop to connect from an Android device to a Debian Wheezy machine.

First, install and set up Chrome Remote Desktop on the Debian machine. Then, install the Chrome Remote Desktop app on the Android device.

Install on Debian

  1. Install Google Chrome.

    The official Debian Wheezy repository includes the Chromium browser, the unbranded, open-source version of Chrome. To avoid any compatibility issues, download the official Google Chrome browser package directly from the Google product page. Then, install it as follows:

    $ sudo dpkg -i google-chrome-stable_current_amd64.deb
  2. Add Chrome Remote Desktop to Chrome.
  3. Configure virtual desktop.
    • Create script to start virtual desktop.

      Create the file ~/.chrome-remote-desktop-session, which contains the command to start your preferred desktop environment. You may look up the command in the corresponding desktop file located in the /usr/share/xsessions directory. For instance, if the desktop is GNOME, look up the Exec command in /usr/share/xsessions/gnome.desktop.

      $ grep '^Exec=' /usr/share/xsessions/gnome.desktop
      Exec=gnome-session

      Note that gnome-session - the text after 'Exec=' - is the command that starts the desktop session. Insert the command into ~/.chrome-remote-desktop-session as follows:

      $ cat > ~/.chrome-remote-desktop-session
      exec gnome-session
    • Change screen resolution (optional).

      By default, the screen resolution of the remotely-connected virtual desktop is 1600 x 1200 pixels. To modify the default resolution, append a line to ~/.profile. For instance, to make it 1024 x 768,

      $ cat >> ~/.profile
      export CHROME_REMOTE_DESKTOP_DEFAULT_DESKTOP_SIZES=1024x768
  4. Install Chrome Remote Desktop daemon.
    • Download the 64-bit Debian package for Chrome Remote Desktop from the Google Chrome Remote Desktop app page.
    • Install the Debian package.

      My first attempt failed due to dependency problems: some required packages were not pre-installed.

      $ sudo dpkg -i chrome-remote-desktop_current_amd64.deb
      ...
      dpkg: dependency problems prevent configuration of chrome-remote-desktop:
       chrome-remote-desktop depends on xvfb-randr | xvfb; however:
        Package xvfb-randr is not installed.
        Package xvfb is not installed.
       chrome-remote-desktop depends on xbase-clients; however:
        Package xbase-clients is not installed.
       chrome-remote-desktop depends on python-psutil; however:
        Package python-psutil is not installed.
      ...

      To resolve the dependency problems, run apt-get -f install and then re-install the package:

      $ sudo apt-get -f install
      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      Correcting dependencies... Done
      The following extra packages will be installed:
        python-psutil xbase-clients xvfb
      The following NEW packages will be installed:
        python-psutil xbase-clients xvfb
      ...
      $ sudo dpkg -i chrome-remote-desktop_current_amd64.deb

      A daemon named chrome-remote-desktop is created.

      $ sudo chkconfig --list | grep chrome
      chrome-remote-desktop  0:off  1:off  2:on  3:on  4:on  5:on  6:off
  5. Authorize Chrome Remote Desktop.

    When you run the Chrome Remote Desktop app for the first time, you will be asked to grant permission for the app to do its job.

    • Open a new tab in Chrome, and enter chrome://apps/.
    • Click Chrome Remote Desktop icon.
    • Click Continue, then Accept.

  6. Enable remote connections.
    • Click the Get started button in the My Computers box.
    • Click Enable remote connections.

      If the Enable remote connections button does not appear on the screen, make sure that the ~/.chrome-remote-desktop-session file is created.

    • Enter a PIN.

    The Debian machine is now ready to accept remote connections. You should see its hostname - panther - listed in the My Computers box.

    If you encounter any connection error, please examine the log file - /tmp/chrome_remote_desktop_YYYYMMDD_HHMMSS_xxxxxx - for clues on what might have gone wrong.

Install and connect from Android

  1. Install the Chrome Remote Desktop app from Google Play onto your Android device.
  2. Open the app on the Android device.
  3. Click the hostname - panther - to remotely connect to it.
  4. Enter the PIN for authentication to the remote host.

    A virtual session is created.

Monday, July 28, 2014

Mix and match stable and testing releases in Debian

My Debian system runs the current stable Debian distribution Wheezy. However, there are software packages that are only available in the current testing distribution Jessie. Examples are dateutils, and Enlightenment 0.17. Too eager to wait for Jessie to become stable, and too lazy to build these packages from source, I used the apt-pinning technique to coerce APT, Debian's package manager, to install these packages on Wheezy.

Apt-pinning lets you mix and match the different distributions in a Debian release: stable, testing, and unstable. The steps below explain how to specify that all packages should be installed or updated from Wheezy (the stable version), with the exception of dateutils which we 'pin' to Jessie (the testing version).

Disclaimer: Apt-pinning is considered an advanced topic in Debian. Use at your own risk.

  1. Add testing to sources.list.

    The file /etc/apt/sources.list specifies the software releases you want and where to fetch them.

    Assuming that your sources.list file currently links to Wheezy only, it should resemble the following, apart from the actual mirror site.

    deb http://deb.vanvps.com/debian/ wheezy main contrib non-free
    deb http://security.debian.org/ wheezy/updates main
    deb http://deb.vanvps.com/debian/ wheezy-updates main

    Append the following line to the file:

    deb http://deb.vanvps.com/debian/ jessie main contrib non-free
  2. Specify install priority preferences.
    • Create the file /etc/apt/preferences if it does not exist.

      This file specifies the priority preferences of which Debian version (e.g., testing vs stable) to install for packages.

    • Insert the following lines.
      Package: dateutils
      Pin: release n=jessie
      Pin-Priority: 750

      Package: *
      Pin: release n=wheezy
      Pin-Priority: 700

      Package: *
      Pin: release o=Debian
      Pin-Priority: -10

      Any package that is not named dateutils and is not part of Wheezy is matched by the third section, with a priority of -10. A negative priority means that the package will never be installed or upgraded. This ensures that no package from Jessie other than dateutils will ever be installed or upgraded.

      When a newer version of dateutils becomes available in Jessie, dateutils will be updated. This is because the dateutils package is matched by the first section, which takes precedence over the third section.

      It is highly recommended that you read the apt_preferences(5) man page to learn more about setting preferences.

  3. Update APT repository.
    $ sudo apt-get update
  4. Install dateutils.

    The following command installs dateutils from testing, and also attempts to fulfill its dependencies from testing.

    $ sudo apt-get -t testing install dateutils

To upgrade your system, execute, as usual, sudo apt-get upgrade. dateutils is upgraded from Jessie while all other packages are upgraded from Wheezy.
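
To verify which version APT will pick, and the pin priorities in effect, query the package policy:

$ apt-cache policy dateutils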

Sunday, July 20, 2014

How to change window manager for GNOME

If I ask a Linux user what desktop environment he is running, most likely he can tell me the correct answer - GNOME, KDE, Xfce, LXDE, etc. But if I ask him what window manager he is running, I won't be too surprised if he can't answer me. In fact, not long ago, I did not know that myself.

The window manager dictates how various visual elements - windows, panes, icons, etc. - look, and how users may interact with these elements. There are many window managers to choose from: Metacity, Mutter, Compiz, Openbox, etc.

The key is that you are not locked in to any window manager. If you don't like your current window manager, change it. This post explains how to change the window manager, specifically for the GNOME desktop environment.

Before we change it, let's find out which window manager is currently running. To do that, you need to install and run a tool named wmctrl.

$ sudo apt-get install wmctrl
$ wmctrl -m
Name: Metacity
Class: N/A
PID: N/A
Window manager's "showing the desktop" mode: N/A

The above output tells us that Metacity is the current window manager.

The procedure to change the window manager is:

  1. Choose a new window manager, say Mutter.
  2. Install the new window manager.
    $ sudo apt-get install mutter
  3. Change window manager.

    If you just want to try out the window manager, then execute the following command in your desktop environment:

    $ mutter --replace &

    The window manager is switched on the fly. However, the change does not persist after logging out: when you next log in to X, the window manager reverts to Metacity.

    To make Mutter your new default, create the file ~/.gnomerc like this:

    $ cat >> ~/.gnomerc
    export WINDOW_MANAGER=mutter

Wednesday, July 16, 2014

Hide command from bash command line history

Linux is known as a very secure operating system. But, it is not going to save us if we voluntarily or unknowingly expose ourselves to unnecessary danger. For instance, a password-authenticated command may allow you to specify the password right on the command-line.

$ mysql -u root -pMyPassword

The command you just executed - with the mysql root password - gets recorded in the shell command history file. For bash, the history file is ~/.bash_history. This is not desirable for security.

$ tail ~/.bash_history
...
mysql -u root -pMyPassword
$

The most effective solution is to break the bad habit: don't enter any password on the command line. For the above example, make mysql prompt you for the password:

$ mysql -u root -p
Enter password:
mysql>

Failing that, you can mitigate the security risk by hiding a command from the command line history. Note: the technique below only works for the bash shell. It won't work for zsh, tcsh, etc. (If you do know the trick for these shells, please let us know through comments).

The bash trick is to enter one or more leading spaces before the actual command.

$  mysql -u root -pMyPassword

A leading blank. Just like that, the command you enter won't be written to the command history file.
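
Note that this trick depends on the HISTCONTROL shell variable: it must contain ignorespace (or ignoreboth, which also drops duplicate commands). Many distributions set it by default; if yours does not, add it to ~/.bashrc:

$ echo 'export HISTCONTROL=ignoreboth' >> ~/.bashrc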

Saturday, July 12, 2014

Share the keyboard and mouse using Synergy - part 2/2

Part 1 of this 2-part series covers x2x, a nifty software tool that lets you use the keyboard and mouse of one X terminal to control another. If you want to control more than 1 other machine, or the machines are on different platforms (Linux, Mac OS X, and Windows), Synergy is the tool.

Installation

My primary machine runs Debian 7.5 - aka Wheezy. I want to use the keyboard and mouse of this machine to control my secondary machine which runs Fedora 19. I have both good news and bad news for you - regarding installation.

First, the good news. Both the Debian and Fedora releases have included Synergy in their official package repositories. Installation is as simple as:

# For Debian
$ sudo apt-get install synergy

# For Fedora
$ sudo yum install synergy

Now, the bad news. The 2 Synergy versions from the respective repositories, 1.3.8 for Debian Wheezy and 1.4.10 for Fedora 19, are incompatible. Starting Synergy on the secondary machine generated this error message:

2014-07-09T20:33:52 WARNING: failed to connect to server: incompatible client 1.3
You're using different versions of synergy on the client and server. You should use the same version on all systems.

The solution I recommend is to download the latest stable release - 1.5.0 at the time of writing - directly from the Synergy download site.
(2018-02-17 update: Synergy is no longer free. More precisely, it is no longer free to download. See pricing plan. Debian users can still download for free from official Debian repositories; Windows, macOS users need to pay.)

To install:

# For Debian
$ sudo dpkg -i synergy-1.5.0-r2278-Linux-x86_64.deb

# For Fedora
$ sudo rpm -i synergy-1.5.0-r2278-Linux-i686.rpm

Setup

First, configure Synergy on the primary machine. Then, the secondary machine. The setup procedure is as follows:

  1. Run Synergy.

    You will find the Synergy program under the Accessories menu for Debian GNOME, and Applications/Utilities for Fedora KDE.

  2. Synergy Premium or not.

    Synergy Premium is the non-free version. Not interested.

  3. Specify Server or Client.

    For the primary machine - the one with the keyboard and mouse that you want to use - select Server. Select Client for the secondary machine.

  4. Enable Encryption.

    After you click Finish, the screen displayed next depends on whether you selected Client or Server.

  5. Configure Server (For primary machine only).

    The following screen is displayed for configuring the primary machine.

    Click Configure Server.

    By default, the grid contains a single node labeled with the hostname of the primary machine ('panther'). Double click the node to edit its settings.

    Optionally, enter a new Screen Name, say 'Desktop'. Then, add the hostname ('panther') as an alias.

    Next, add a new node representing the secondary machine to the grid. Drag the monitor icon, and drop it at the grid location relative to how it is physically positioned with respect to the primary machine. In my case, the secondary machine is located to the left of the primary machine.

    Edit the still unnamed secondary machine. Give it a meaningful screen name. Add the hostname as the alias.

  6. Configure Client (For secondary machine only).

    Specify the IP address of the Server (the primary machine).

  7. Click Start.

    This starts the actual Synergy Server - or Client - program using the latest configuration data.

Now, you can slide the mouse of the primary machine to the left, crossing the left edge, and into the screen of the secondary machine. You can return to the primary machine by sliding the mouse to the right, crossing the right edge.

Auto-starting Synergy

Instead of manually launching Synergy, you can set up auto-start for Synergy like any other X application. For GNOME 3, execute the command gnome-session-properties to bring up the Startup Applications Preferences app. For KDE 4, follow this chain of menus: Applications / Settings / System Settings / System Administration / Startup and Shutdown.

Command-line interface

You can launch Synergy using the command-line interface.

For the primary machine:

  1. Prepare the Synergy configuration file (~/.synergy.conf).
    $ cat > ~/.synergy.conf
    section: screens
        Laptop:
        Desktop1:
    end
    section: links
        Desktop1:
            left = Laptop
        Laptop:
            right = Desktop1
    end
    section: aliases
        Desktop1:
            panther
        Laptop:
            localhost.localdomain
    end
  2. Start the Synergy Server.
    $ synergys -f

    The -f option means that the program is run in the foreground. The log messages are displayed in the terminal. Very useful for troubleshooting.

For the secondary machine, execute synergyc with the IP address of the primary machine:

$ synergyc -f 192.168.1.103