Saturday, November 16, 2013

Linux Mint - Mate Desktop: Configure transmission-gtk to handle magnet-links in Google Chrome

Objective


The goal of this tutorial is to make Google Chrome automatically start transmission-gtk when you click a magnet link within your browser.

Motivation


I was using Firefox (and previously the Mozilla Suite) as my favorite web browser for nearly everything (on Linux, Windows and Mac). Recently I experimented with Google Chrome and found that it is also a nice alternative to Firefox and in many cases much faster. But one of the drawbacks I encountered on Linux was that it seems to handle foreign or unknown protocols differently than Firefox: it merely relies on "xdg-open", which is not always configured correctly for every desktop environment, as in my case.

Prerequisites


Linux Mint 15 - Olivia
Desktop - MATE 1.6
Google Chrome (v31.0.1650.57)
transmission-gtk (v2.77-14031)
xdg-open (v1.0.2)

The problem may also occur with other distributions and versions; the above is simply my current environment.

Solution


Reproduce the problem


1. Start Google Chrome
2. Navigate to an internet site that provides a magnet-link
3. Click on the link
4. If Google Chrome just opens another window or tab, you are facing the problem.

Solve the problem


In contrast to Firefox, which manages external protocol handlers itself, Google Chrome relies on the underlying system. In this particular environment that is the "xdg-open" script. Unfortunately, this script does not recognize MATE as a native desktop environment and therefore calls the generic handler for URLs, which happens to be Google Chrome itself.

In this solution, we configure the external application "transmission-gtk" to handle magnet links within Google Chrome.

1.
Check where the "transmission-gtk.desktop" file can be found, e.g. with the command "locate transmission-gtk.desktop". In my case it is located in "/usr/share/applications/".
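If "locate" is not available on your system (or its database is not up to date), a plain "find" should work as well; this is just an alternative way to get the same information:

$ find /usr/share/applications -name 'transmission-gtk.desktop'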

2.
Now check its content by opening the file with an editor of your choice. Make sure you find the entries

Exec=transmission-gtk %U

and

MimeType=application/x-bittorrent;x-scheme-handler/magnet;

in the file. Ensure that the first entry contains "%U", because this is the placeholder for the concrete URL passed through by Google Chrome.
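For orientation, the relevant part of such a .desktop file looks roughly like the following sketch; the exact set of keys and values may differ on your system:

[Desktop Entry]
Type=Application
Name=Transmission
Exec=transmission-gtk %U
Terminal=false
MimeType=application/x-bittorrent;x-scheme-handler/magnet;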

3.
Configure your system so that "transmission-gtk" is the default handler for magnet links by executing the following command in your shell:

$ xdg-mime default transmission-gtk.desktop x-scheme-handler/magnet
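You can verify the new default afterwards; the following query should now print "transmission-gtk.desktop":

$ xdg-mime query default x-scheme-handler/magnet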

4.
Enable "xdg-open" to recognize your MATE desktop as a Gnome Desktop, because it has the same ancestor and therefore is compatible to Gnome, but not recognized in the "xdg-open" script, because of the different name.

For this step you need root permissions, so be careful what you are doing.
Locate the xdg-open script on your system with "which xdg-open". In my case this leads to "/usr/bin/xdg-open".
Open the file as root (again with your editor of choice), e.g. "sudo vi /usr/bin/xdg-open".
Search for the following section (note: the change below is just a hack and not a solid solution):

if [ x"$DE" = x"" ]; then
    DE=generic
fi

and change it to

if [ x"$DE" = x"" ]; then
    DE=gnome
fi

Now, save the file.

5.
Restart your desktop session and try to reproduce the problem again. It should be gone now, and transmission-gtk should open to handle magnet links instead of Google Chrome.
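You can also test the handler without the browser by passing a magnet link directly to "xdg-open" from a shell; the info hash below is just a placeholder:

$ xdg-open "magnet:?xt=urn:btih:<some-info-hash>"

If everything is configured correctly, transmission-gtk should start and offer to add the torrent.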

Friday, August 2, 2013

Ruby Version Management: Using rbenv in favor of rvm

Objective


As I still consider myself a beginner with the Ruby programming language, I'd like to write a little "HowTo" on installing and managing the versions of the Ruby interpreter on my local machine, as a personal reminder.

Since I have always used the "Ruby Version Manager (rvm)" for this job, I decided this time, with ruby 2.0 and rails 4.0, to take the chance to experiment with "Ruby Environment (rbenv)", which the Ruby on Rails homepage also recommends as a substitute for rvm. When using rvm in the past I always got confused between system-wide installation using sudo and installation just for the current user. Hopefully, I can escape this mess using rbenv.

Prerequisites


At the moment, I am sitting in front of my MacBook Air running OS X Mountain Lion (10.8.x) with only the default Ruby version installed, which is "ruby 1.8.7".

To start the process of installation, I moved on from the RoR homepage to

https://github.com/sstephenson/rbenv

where I expected to find some instructions to start with.

Yeah! The documentation there seems to cover all aspects of rbenv. So I'll just leave it at the link to the documentation and only write about my experience installing rbenv and ruby 2.0.

Installation


Additional Prerequisites


After reading the instructions, I decided to install rbenv from the source distribution on GitHub instead of using homebrew. While starting the process I noticed that I hadn't even installed the Mac Vim editor "mvim" yet, but still had an "alias vi='~/ApplicationsMacVim-snapshot63/mvim'" in my "~/.bash_profile". It must have been a leftover from a previous Mac OS X installation. So I quickly jumped to the developer site of MacVim and installed the proper version. After adjusting the alias I could proceed with installing rbenv.

Note: Ensure that after changing the alias you restart your shell or at least re-read your "~/.bash_profile" with the command "source ~/.bash_profile". With the command "alias" you can list all defined aliases and check that everything is correct.
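In short (for bash):

$ source ~/.bash_profile
$ alias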

To avoid confusion: in the following paragraphs, "Step X" always refers to the number of the step in the original rbenv installation instructions.

Step 2


I couldn't follow Step 2 of the installation process literally, because I am already using a customized ~/.bash_profile and the instructions would just append another export statement for the $PATH variable at the end of the ~/.bash_profile. Therefore, I adjusted my ~/.bash_profile manually by replacing the following line

export PATH=${PATH}:${HOME}/bin

by

export PATH=$HOME/.rbenv/bin:${PATH}:${HOME}/bin

Step 3


I also decided to apply Step 3 by hand, keeping in mind that the 'export PATH=...' line must appear earlier in the file than 'eval "$(rbenv init -)"'.
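After both manual adjustments the relevant lines in my "~/.bash_profile" look like this (the export line must come first):

export PATH=$HOME/.rbenv/bin:${PATH}:${HOME}/bin
eval "$(rbenv init -)"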

Step 5


At Step 5, I switched over to the installation guide for the ruby-build plugin, which surprisingly was written by the same author as rbenv, Sam Stephenson. The installation as a plugin for rbenv worked like a charm, so I thought I would be able to install ruby 2.0 now.

Hmm, but how exactly do I choose the ruby version?

Just typing "rbenv install" without any parameter gives an overview about the options that can be used with the install command and as I already expected, there was an option to list all available Ruby versions: "rbenv install -l" which gave me ...

2.0.0-dev
2.0.0-p0
2.0.0-p195
2.0.0-p247
2.0.0-preview1
2.0.0-preview2
2.0.0-rc1
2.0.0-rc2

Oops! It seems that I am doomed!

Which version to install?
What is this versioning scheme "dev", "p#", "preview#", "rc#" all about?

I just wanted to install the latest stable version.

After some investigation, I brought light into the dark:

name-suffix   meaning
dev           development branch
p#            stable version at patch level #
preview#      preview version no. #
rc#           release candidate no. #

So, I finally decided to pick version 2.0.0-p247:

$ rbenv install 2.0.0-p247

Surprisingly, the command first started to install openssl on my Mac.

Downloading openssl-1.0.1e.tar.gz...
-> https://www.openssl.org/source/openssl-1.0.1e.tar.gz
Installing openssl-1.0.1e...
Installed openssl-1.0.1e to /Users/cschmidt/.rbenv/versions/2.0.0-p247

This was unfortunately not mentioned anywhere in the instructions, but gladly it is installed within the "~/.rbenv" directory itself and therefore does not mess up the system. Finally I got ...

Downloading ruby-2.0.0-p247.tar.gz...
-> http://ftp.ruby-lang.org/pub/ruby/2.0/ruby-2.0.0-p247.tar.gz
Installing ruby-2.0.0-p247...
Installed ruby-2.0.0-p247 to /Users/cschmidt/.rbenv/versions/2.0.0-p247

Nicely, ruby 2.0.0 can be built with the clang compiler and does not require gcc to be installed on your Mac, as its predecessor versions (e.g. "ruby 1.9.2") did.

So I assume that ruby 2.0 is installed now. Let's do a last check ...

$ ruby -v

ruby 1.8.7 (2012-02-08 patchlevel 358) [universal-darwin12.0]

Aha, ... ok, there seems to be some configuration left.

Configuration


As I wanted to do this quickly and system-wide:

$ rbenv global 2.0.0-p247

Check again ...

$ ruby -v

ruby 2.0.0p247 (2013-06-27 revision 41674) [x86_64-darwin12.4.0]
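As an additional check, "rbenv versions" lists all installed interpreters and marks the currently active one. And if you prefer to pin a version only for a single project, "rbenv local" can be used inside the project directory instead of "rbenv global":

$ rbenv versions
$ rbenv local 2.0.0-p247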

Finally done.

How to emulate Java "synchronized" keyword in C++

Objective


The objective of this article is to show how to provide a 'synchronized' keyword in C++ that works like the well-known 'synchronized' keyword in Java for locking and unlocking blocks of code.


Example:
Imagine you have a FIFO queue where you can add items from different producer threads, and there is another consumer thread that picks up the items for processing.


Listing 1:

01. public class ItemProcessor {
02.     private ArrayList<Item> queue = new ArrayList<Item>();
03.
04.     public void putItem(Item item) {
05.         synchronized(this) {
06.             queue.add(item);
07.         }
08.     }
09.
10.     public int processItem() {
11.         Item item = null;
12.         synchronized(this) {
13.             if (queue.isEmpty()) {
14.                 return 0;
15.             } else {
16.                 item = queue.get(0);
17.                 queue.remove(0);
18.             }
19.         }
20.         return processItem(item);
21.     }
22.
23.     private int processItem(Item item) {
24.         int result = 0;
25.         // ... do something with item and set result
26.         return result;
27.     }
28. }

In Java you can clearly see which parts of the code are synchronized and which are not.

Motivation


Over the years of working as a professional software engineer I have seen a lot of C++ code that uses spinlocks or mutexes which have to be locked and unlocked manually. Even in situations where the scope is very clear and stays within a single method or function, this can be very error-prone.

As most of you know, it's very important to keep the lock and unlock calls balanced in order to prevent deadlocks or race conditions.

Let's look especially at the more complicated method 'processItem' and how it would look in C++ with manual locking and unlocking.

Listing 2:

01. class ItemProcessor {
02.     private:
03.     Mutex mutex_;
04.     std::vector<Item*> queue_;
05.
06.     public:
07.     void putItem(Item* item) {
08.         mutex_.lock();
09.         queue_.push_back(item);
10.         mutex_.unlock();
11.     }
12.
13.     int processItem() {
14.         Item* item = NULL;
15.         mutex_.lock();
16.         if (queue_.empty()) {
17.             mutex_.unlock(); // If missing -> Candidate for a deadlock!
18.             return 0;
19.         } else {
20.             item = queue_[0];
21.             queue_.erase(queue_.begin());
22.         }
23.         mutex_.unlock();
24.         return this->processItem(item);
25.     }
26.
27.     private:
28.     int processItem(Item* item) {
29.         int result = 0;
30.         // ... do something with item and set result
31.         return result;
32.     }
33. };

As you can see, you need the easily forgotten unlock statement in line 17 to correctly balance your locks and unlocks.


As I am not only a C++ programmer but have also used Java very intensively in the past years, I was always attracted by how simple it is to work with synchronized blocks in Java compared to the inconvenient use of manual locks and unlocks in C++ or Objective-C (although Objective-C 2.0 now also has "@synchronized").

Therefore I thought about a way to extend C++ with a proper 'synchronized' keyword that is semantically equivalent to synchronized in Java. Furthermore, I was curious whether there is a way to also achieve syntactic equality.

Evolution of a solution


My first approach is to implement a template class called SynchronizedBlock that takes a lock class L as a template parameter. The specific type of the lock class does not matter; it can be either a spinlock or a mutex, as long as it provides the instance methods 'void lock()' and 'void unlock()'.
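As a side note, the listings in this article use a lock class called Mutex that is not shown anywhere. A minimal sketch of such a class, here simply wrapping std::mutex and therefore assuming a C++11 compiler, could look like this:

#include <mutex>

// Minimal lock class as assumed by the listings: it only has to
// provide void lock() and void unlock().
class Mutex {
    public:
        void lock()   { mutex_.lock(); }
        void unlock() { mutex_.unlock(); }
    private:
        std::mutex mutex_;
};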

The template class looks like:

Listing 3:

01. template<typename L> class SynchronizedBlock {
02.     public:
03.         SynchronizedBlock(L& lock) : lock_(lock) {
04.             lock_.lock();
05.         }
06.         ~SynchronizedBlock() {
07.             lock_.unlock();
08.         }
09.     private:
10.         L& lock_;
11. };

If you modify the C++ implementation from Listing 2 and use the new template class SynchronizedBlock from Listing 3 as a helper for locking and unlocking, you are doing nothing else than following the famous Resource Acquisition Is Initialization (RAII) pattern coined by B. Stroustrup.

Listing 4:

01. //...
02. int processItem() {
03.     Item* item = NULL;
04.     {
05.         SynchronizedBlock<Mutex> block(mutex_);
06.         if (queue_.empty()) {
07.             return 0;
08.         } else {
09.             item = queue_[0];
10.             queue_.erase(queue_.begin());
11.         }
12.     }
13.     return this->processItem(item);
14. }
15. //...

As you can see in Listing 4, the error-prone line 17 from Listing 2 is gone. Manual unlocking is not necessary, because lock() on the mutex is called in the constructor of our local variable block and unlock() is called during the destruction of block. In this case block is destructed either in line 7 or in line 12 (when the closing curly bracket of the inner scope is reached). To make this automatism work, it is important to create the variable block on the stack and not on the heap (as you can see, there is no 'SynchronizedBlock<Mutex>* block = new SynchronizedBlock<Mutex>(mutex_);' statement).

The second important thing you'll notice is that it is necessary to introduce the extra scope brackets to ensure the correct lifetime of our block variable: the destructor must be called before line 13 to ensure semantic equivalence to the original implementation in Listing 2.

Needing these extra curly brackets is the point that was still bugging me. In contrast to Java's 'synchronized', I have to define the lifetime scope of my synchronized block manually before I can create the block variable, which makes the usage still a bit inconvenient.

Improvement of the syntax


To get rid of this syntactical flaw, which is needed to ensure semantic correctness, I remembered that C++ allows variables to be declared within a for-loop header, followed by curly brackets for the loop body. I thought I could use this fact to my advantage, e.g.:

Listing 5:

01. //...
02. for (int i=0, c=5; i<c; ++i) {
03.     // do something
04. }
05. //...

The lifetime of the variables "i" and "c" extends exactly to the closing bracket in line 04 of Listing 5.

To achieve my goal of having a convenient syntax for a synchronized block in C++ that should look like

Listing 6:

01. synchronized(mutex) {
02.     // do something 
03. }

I needed another helper template class called 'SynchronizeGuard' with the following implementation:

Listing 7:

01. template<typename L> class SynchronizeGuard {
02.     public:
03.         SynchronizeGuard(L& lock);
04.         ~SynchronizeGuard();
05.         bool isLocked() const;
06.         void lock();
07.         void unlock();
08.     private:
09.         L& lock_;
10.         volatile bool state_;
11. };
12.
13. template<typename L> inline SynchronizeGuard<L>::SynchronizeGuard(L& lock)
14. : lock_(lock), state_(false) {
15.     this->lock();
16. }
17.
18. template<typename L> inline SynchronizeGuard<L>::~SynchronizeGuard() {
19.     if (state_)
20.         lock_.unlock();
21. }
22.
23. template<typename L> inline bool SynchronizeGuard<L>::isLocked() const {
24.     return state_;
25. }
26.
27. template<typename L> inline void SynchronizeGuard<L>::lock() {
28.     lock_.lock();
29.     state_ = true;
30. }
31.
32. template<typename L> inline void SynchronizeGuard<L>::unlock() {
33.     lock_.unlock();
34.     state_ = false;
35. }

With the help of the template class SynchronizeGuard and the knowledge about the scope of variables in a for-loop, you can express the scope of the block that should be synchronized as follows (again taking the method 'processItem' from Listing 4 as an example):

Listing 8:

01. //...
02. int processItem() {
03.     Item* item = NULL;
04.     for (SynchronizeGuard<Mutex> guard(mutex_); guard.isLocked(); guard.unlock())
05.     {
06.         if (queue_.empty()) {
07.             return 0;
08.         } else {
09.            item = queue_[0];
10.            queue_.erase(queue_.begin());
11.         }
12.     }
13.     return this->processItem(item);
14. }
15. //...

Now, besides the fact that the synchronization code and the curly brackets are in the correct order, this statement seems to be much more inconvenient to write than the code in Listing 4.

Hold on, even though its overuse is not recommended in C++, we still have the preprocessor. Let's put that nasty line 4 from Listing 8 into a macro:

Listing 9:

01. #define synchronized(lock) \
02.     if(false) {} \
03.     else \
04.     for (SynchronizeGuard<decltype(lock)> guard(lock); guard.isLocked(); guard.unlock())
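Note that decltype requires a C++11 compiler; with an older compiler you would have to pass the lock type to the macro explicitly. After preprocessing, a block like 'synchronized(mutex_) { /* do something */ }' therefore expands roughly to:

if (false) {}
else
    for (SynchronizeGuard<decltype(mutex_)> guard(mutex_); guard.isLocked(); guard.unlock()) {
        /* do something */
    }

The guard is constructed (and the lock acquired) in the for-initializer, the body runs exactly once, and guard.unlock() in the increment part makes the condition false afterwards. If the body is left early via return, the destructor of guard releases the lock instead.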

Et voilà! Putting it all together, the beautified implementation from Listing 2 can be expressed in C++ very similarly to the code written in Java (Listing 1):

Listing 10:

01. class ItemProcessor {
02.     private:
03.     Mutex mutex_;
04.     std::vector<Item*> queue_;
05.
06.     public:
07.     void putItem(Item* item) {
08.         synchronized(mutex_) {
09.             queue_.push_back(item);
10.         }
11.     }
12.
13.     int processItem() {
14.         Item* item = NULL;
15.         synchronized(mutex_) {
16.             if (queue_.empty()) {
17.                 return 0;
18.             } else {
19.                 item = queue_[0];
20.                 queue_.erase(queue_.begin());
21.             }
22.         }
23.         return this->processItem(item);
24.     }
25.     // ...
26. };
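To see the macro in action, here is a minimal, single-threaded sketch of a test driver. It assumes that the Mutex sketch from above, the SynchronizeGuard template, the synchronized macro and the completed ItemProcessor class are all available, and that an Item type like the hypothetical one below is defined before ItemProcessor. In a real producer/consumer setup, putItem() and processItem() would of course be called from different threads.

#include <cstdio>

struct Item { int value; };   // hypothetical item type used by ItemProcessor

int main() {
    ItemProcessor processor;

    // Enqueue a few items; putItem() locks and unlocks mutex_ internally
    // via synchronized(mutex_).
    Item items[3] = { {1}, {2}, {3} };
    for (int i = 0; i < 3; ++i) {
        processor.putItem(&items[i]);
    }

    // Drain the queue again; each call synchronizes on the same mutex.
    for (int i = 0; i < 3; ++i) {
        std::printf("processItem() returned %d\n", processor.processItem());
    }

    return 0;
}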