tag:blogger.com,1999:blog-25259460833674052222024-03-16T08:09:16.232+01:00It's only codeChristian Schmidthttp://www.blogger.com/profile/01625744616539341979noreply@blogger.comBlogger22125tag:blogger.com,1999:blog-2525946083367405222.post-13881412492745201092020-10-05T20:56:00.000+02:002020-10-05T20:56:10.430+02:00Linux Mint: Install the proper firmware and set-up the USB-BT400 for linux<h2>Objective</h2><br />
I wanted to get my new <a href="https://www.asus.com/Networking/USBBT400/">USB-BT400</a> Bluetooth adapter working with Linux Mint.<br />
<br />
<h2>Motivation</h2><br />
Due to the current corona situation I had to work from home and needed a headset. Until now I only owned a cheap wired one that I rarely used, and recently it more or less disassembled itself. Since I would be using a headset on a daily basis from now on, I figured it was better to buy a good one instead of another crappy cheap one. It should be of good quality and also comfortable to use, which above all means: no more cables. My choice fell on the <a href="https://www.sony.com/electronics/headband-headphones/wh-xb900n/">Sony XB900N</a>. I also wanted to use it not only with my laptop but also with my desktop computer, which unfortunately has no Bluetooth adapter on its mainboard, so I decided to buy a Bluetooth adapter as well: the <a href="https://www.asus.com/Networking/USBBT400/">USB-BT400</a>. Both the adapter and the headset worked well on Windows, and the headset even worked out-of-the-box with my laptop. On my desktop, however, running my favorite OS Linux Mint Ulyana, the headset's speakers worked fine but the microphone was not recognized at all. Time to fix this!<br />
<br />
<h2>Prerequisites</h2><br />
<ul><li>Linux Mint 20.0 Ulyana - Cinnamon (64 Bit)</li>
<li><a href="https://www.asus.com/Networking/USBBT400/">USB-BT400</a></li>
<li>Any bluetooth headset with microphone (e.g. <a href="https://www.sony.com/electronics/headband-headphones/wh-xb900n/">Sony XB900N</a>)</li>
</ul><br />
<h2>Solution</h2><br />
<h3>Test if all works</h3><br />
First, plug the Bluetooth adapter into a free USB slot and turn Bluetooth on. Open a Bluetooth device manager (e.g. the default one or blueman) and try to find your Bluetooth headset. Set it up, pair it and connect. If everything works fine, <strong>good!</strong> Otherwise...<br />
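If pairing fails, it is worth checking first that the adapter is detected on the USB bus at all. On a real system you would simply run <code>lsusb</code>; the snippet below just filters a captured example line (bus and device numbers will differ on your machine) for the USB-BT400's vendor:product ID:

```shell
# The USB-BT400 enumerates with ASUS vendor ID 0b05 and product ID 17cb.
# On your machine run:  lsusb | grep -i 0b05:17cb
lsusb_line='Bus 003 Device 006: ID 0b05:17cb ASUSTek Computer, Inc. Broadcom BCM20702A0 Bluetooth'
echo "$lsusb_line" | grep -o 'ID 0b05:17cb'
# prints: ID 0b05:17cb
```

If no such line appears in your <code>lsusb</code> output, the problem is the USB connection itself, not the firmware.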
<br />
<h3>Check what the problem is</h3><br />
Open a terminal and type the following command:<br />
<br />
<div class="bash">$> sudo dmesg | egrep -i 'blue|firm'</div><br />
you'll see some output similar to this:<br />
<br />
<div class="code gray-box">[ 0.196061] Spectre V2 : Enabling Restricted Speculation for firmware calls<br>
[ 1.011797] usb 3-6: Product: BCM920702 Bluetooth 4.0<br>
[ 4.962537] Bluetooth: Core ver 2.22<br>
[ 4.962552] Bluetooth: HCI device and connection manager initialized<br>
...<br>
[ 5.169217] Bluetooth: hci0: BCM20702A1 (001.002.014) build 1346<br>
[ 5.172387] bluetooth hci0: Direct firmware load for brcm/BCM20702A1-0b05-17cb.hcd failed with error -2<br>
[ 5.172390] Bluetooth: hci0: BCM: Patch brcm/BCM20702A1-0b05-17cb.hcd not found</div><br />
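The name of the missing file is not arbitrary: the kernel assembles it from the chip revision reported by the controller and the adapter's USB vendor and product IDs, and then searches for it under <code>/lib/firmware</code>. A small sketch of how the name from the error message above is composed:

```shell
# Firmware patch name: chip-vendor-product.hcd, looked up below /lib/firmware
chip="BCM20702A1"    # chip revision from the dmesg line above
vendor="0b05"        # ASUS USB vendor ID
product="17cb"       # USB-BT400 product ID
echo "brcm/${chip}-${vendor}-${product}.hcd"
# prints: brcm/BCM20702A1-0b05-17cb.hcd
```

This is why the file you download in the next step must carry exactly this name.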
<br />
<h3>Get the missing firmware and install it</h3><br />
Navigate to the <a href="https://github.com/winterheart/broadcom-bt-firmware/tree/master/brcm">USB-BT400 firmware</a> directory and locate the firmware file named in the error message above; mine is <code>BCM20702A1-0b05-17cb.hcd</code>. Note that the file must be fetched via GitHub's <em>raw</em> URL; the regular <code>blob</code> page URL returns an HTML page, not the firmware itself.
<div class="bash">$> cd ~/Downloads<br />
$> wget https://github.com/winterheart/broadcom-bt-firmware/raw/master/brcm/BCM20702A1-0b05-17cb.hcd<br />
$> sudo mv BCM20702A1-0b05-17cb.hcd /lib/firmware/brcm/</div><br />
you'll see some output similar to this:<br />
<br />
<div class="code gray-box">Resolving github.com (github.com)... 140.82.121.3<br>
Connecting to github.com (github.com)|140.82.121.3|:443... connected.<br>
HTTP request sent, awaiting response... 200 OK<br>
[...]<br>
Saving to: ‘BCM20702A1-0b05-17cb.hcd’<br>
‘BCM20702A1-0b05-17cb.hcd’ saved</div><br />
<br />
If wget reports a content type of <code>text/html</code>, you downloaded the GitHub web page instead of the raw firmware file; double-check the URL.<br />
<br />
<h3>Verify that all is fine now</h3><br />
Reboot (or reload the Bluetooth USB driver via <code>sudo modprobe -r btusb &amp;&amp; sudo modprobe btusb</code>, which makes the kernel retry the firmware load) and check again.
<div class="bash">$> sudo dmesg | egrep -i 'blue|firm'</div><br />
you'll see some output similar to this:<br />
<br />
<div class="code gray-box">[ 0.196061] Spectre V2 : Enabling Restricted Speculation for firmware calls<br>
[ 1.011797] usb 3-6: Product: BCM920702 Bluetooth 4.0<br>
[ 4.962537] Bluetooth: Core ver 2.22<br>
[ 4.962552] Bluetooth: HCI device and connection manager initialized<br>
...<br>
[ 5.979445] Bluetooth: hci0: BCM20702A1 (001.002.014) build 1467<br>
[ 5.995451] Bluetooth: hci0: Broadcom Bluetooth Device</div><br>
<br />
Open the Bluetooth device manager (e.g. the default one or blueman) and try to find your Bluetooth headset. Set it up, pair it and connect.
Navigate with your browser to a page to <a href="https://webcammictest.com/check-mic.html">test your microphone</a> online. Done!<br /><br />
<h2>References:</h2><ol><li><a href="https://forum.ubuntuusers.de/topic/asus-usb-bt400-usb-adapter-bluetooth-4-0-mit-y/">Ubuntu forum discussing the issue</a></li>
<li><a href="https://github.com/winterheart/broadcom-bt-firmware">Broadcom Bluetooth firmware for Linux kernel</a></li>
<li><a href="https://github.com/winterheart/broadcom-bt-firmware/tree/master/brcm">USB-BT400 firmware</a></li>
<li><a href="https://webcammictest.com/check-mic.html">Online webcam, speaker and microphone test page</a></li>
</ol>Christian Schmidthttp://www.blogger.com/profile/01625744616539341979noreply@blogger.com0tag:blogger.com,1999:blog-2525946083367405222.post-11557853868620455362019-07-11T23:05:00.000+02:002019-07-11T23:09:01.460+02:00Windows 10: Compile the Lua interpreter from source and install it for the local user<h2>Objective</h2><br />
Compile the latest version of the <a href="https://lua.org/">Lua</a> language interpreter from source (in this case version 5.3.5). Build a locally installed Lua interpreter for the current user only, so that it can be safely removed later when necessary.<br />
<br />
<h2>Motivation</h2><br />
As I already mentioned in my previous post "<a href="https://itsonlycode.blogspot.com/2019/06/linux-mint-compile-lua-interpreter-from.html">Linux Mint: Compile the Lua interpreter from source and build your own Debian package to install it</a>", I wanted a scripting language that can be smoothly integrated and combined with C++, and I figured that Lua would be very handy for that job. This time I didn't just want to build the language interpreter for my main operating system, Linux Mint, but also for Windows 10, and I wanted both installations built from the same sources. This article describes how to get a local, portable, per-user installation of the Lua interpreter on Windows 10, compiled with Microsoft's command-line (CLI) compiler, e.g. the one shipped with Visual Studio 2019 Community Edition.<br />
<br />
<h2>Prerequisites</h2><br />
<ul><li>Windows 10 (64 Bit)</li>
<li>Microsoft Visual Studio 2019</li>
</ul><br />
<h2>Solution</h2><br />
I decided to do the whole build and all temporary work within a <code>"tmp"</code> folder in my home directory.<br />
The build will take place in the sub-directory <code>"lua"</code> within that <code>"tmp"</code> folder.<br />
<br />
<h3>Get the latest source of the Lua interpreter from their homepage</h3><br />
Open a cmd shell by right-clicking the Windows icon in the lower-left corner of your screen, selecting "Run" from the context menu and typing <code>"cmd"</code>.<br />
You should get a command shell whose current directory is already your home directory; to be sure, navigate there and create the sub-directories <code>"tmp\lua"</code> by typing:<br />
<br />
<div class="bash">$> cd %HOMEDRIVE%\%HOMEPATH%<br />
$> mkdir tmp\lua<br />
$> cd tmp\lua</div><br />
To see, where <code>"%HOMEDRIVE%\%HOMEPATH%"</code> is pointing to, type in your cmd-shell:<br />
<br />
<div class="bash">$> echo %HOMEDRIVE%\%HOMEPATH%</div><br />
<div class="code gray-box">C:\Users\cschmidt</div><br />
Now, open your preferred browser and navigate to the <a href="https://www.lua.org/ftp/">Lua FTP download page</a>. Download the latest version of the source code and save it in your home directory under the newly created folder <code>"%HOMEDRIVE%\%HOMEPATH%\tmp\lua"</code>. In my case the current source package of Lua was "<a href="https://www.lua.org/ftp/lua-5.3.5.tar.gz">lua-5.3.5.tar.gz</a>".<br />
<br />
Now you have to unpack the downloaded package <code>"lua-5.3.5.tar.gz"</code> into the <code>"tmp\lua"</code> folder. I am not sure whether the standard Windows 10 unzip mechanism can handle <code>"*.tar.gz"</code> archives, which are commonly used on *nix-like systems such as Linux, so I recommend installing an archive manager like <a href="https://www.7-zip.org">7-Zip</a> anyway.<br />
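For background, a <code>.tar.gz</code> file is a gzip-compressed tar archive, which is why some tools unpack it in two steps (first gunzip to a plain <code>.tar</code>, then extract). On a *nix-like shell both layers are handled in one command; the miniature archive below is just a demonstration, not the real Lua package:

```shell
# Create and unpack a tiny .tar.gz to show the two layers (tar + gzip).
mkdir -p demo/lua-5.3.5
echo 'print("hello")' > demo/lua-5.3.5/hello.lua
tar -czf demo.tar.gz -C demo lua-5.3.5        # c=create, z=gzip, f=archive file
mkdir -p out && tar -xzf demo.tar.gz -C out   # x=extract, into ./out
ls out/lua-5.3.5
# prints: hello.lua
```

7-Zip mirrors the two logical steps explicitly, which is why you may end up with an intermediate <code>.tar</code> file, as described below.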
<br />
Once you have successfully unpacked the <code>"lua-5.3.5.tar.gz"</code> source code package, you should get an additional directory named <code>"lua-5.3.5"</code> within your <code>"tmp\lua"</code> folder. If you just have an intermediate file called <code>"lua-5.3.5.tar"</code>, you have to unpack that file again.<br />
<br />
<div class="bash">$> dir</div><br />
<div class="code gray-box">Directory of C:\Users\cschmidt\tmp\lua<br />
<br />
11.07.2019 14:19 &lt;DIR&gt; .<br />
11.07.2019 14:19 &lt;DIR&gt; ..<br />
26.06.2018 18:21 &lt;DIR&gt; lua-5.3.5<br />
21.06.2019 15:11 303.543 lua-5.3.5.tar.gz<br />
1 File(s) 303.543 bytes<br />
3 Dir(s) 160.112.115.712 bytes free</div><br />
Everything from the source code package will be unpacked into the sub-directory <code>"lua-5.3.5"</code><br />
To see, what we got, type:<br />
<br />
<div class="bash">$> cd lua-5.3.5<br />
$> dir /s</div><br />
<div class="code gray-box">Volume in drive C has no label.<br />
Volume Serial Number is 62D3-5614<br />
<br />
Directory of C:\Users\cschmidt\tmp\lua\lua-5.3.5<br />
<br />
[...]<br />
<br />
Directory of C:\Users\cschmidt\tmp\lua\lua-5.3.5\doc<br />
<br />
[...]<br />
<br />
Directory of C:\Users\cschmidt\tmp\lua\lua-5.3.5\src<br />
<br />
11.07.2019 14:53 &lt;DIR&gt; .<br />
11.07.2019 14:53 &lt;DIR&gt; ..<br />
06.12.2017 20:35 31.352 lapi.c<br />
19.04.2017 19:20 545 lapi.h<br />
19.04.2017 19:20 30.495 lauxlib.c<br />
19.04.2017 19:20 8.632 lauxlib.h<br />
[...]<br />
19.04.2017 19:20 1.305 lualib.h<br />
19.04.2017 19:20 6.179 lundump.c<br />
19.04.2017 19:20 803 lundump.h<br />
19.04.2017 19:29 7.075 lutf8lib.c<br />
19.04.2017 19:39 44.393 lvm.c<br />
19.04.2017 19:20 3.685 lvm.h<br />
19.04.2017 19:20 1.365 lzio.c<br />
19.04.2017 19:20 1.481 lzio.h<br />
25.06.2018 19:46 6.911 Makefile<br />
63 File(s) 694.458 bytes<br />
<br />
Total Files Listed:<br />
75 File(s) 1.088.257 bytes<br />
8 Dir(s) 160.102.793.216 bytes free</div><br />
<h3>Compile the Lua interpreter from the sources</h3><br />
In order to compile the Lua interpreter's source code, we need a compiler. I assume that you already have "Visual Studio 2019" (the Community Edition is sufficient) or something similar installed.<br />
<br />
We now need a command shell with the compiler toolchain set up. The easiest way to get this "special" shell is to start Visual Studio and launch it from the menu:<br />
<strong>"Tools --> Command Line --> Developer Command Prompt"</strong> (note that this prompt targets x86 by default; for 64-bit binaries use the "x64 Native Tools Command Prompt" from the Start menu instead)<br />
<br />
In that shell, navigate again to our source code folder:<br />
<br />
<div class="bash">$> cd %HOMEDRIVE%\%HOMEPATH%\tmp\lua\lua-5.3.5\src</div><br />
To compile the sources, type the commands below. The two <code>ren</code> commands rename the objects containing the <code>main()</code> entry points of the interpreter and the compiler from <code>.obj</code> to <code>.o</code>, so that the later <code>*.obj</code> wildcards used for the DLL and the static library do not pick them up:<br />
<br />
<div class="bash">$> cl /MD /O2 /c /DLUA_BUILD_AS_DLL *.c<br />
$> ren lua.obj lua.o<br />
$> ren luac.obj luac.o<br />
$> link /DLL /IMPLIB:lua5.3.5.lib /OUT:lua5.3.5.dll *.obj<br />
$> link /OUT:lua.exe lua.o lua5.3.5.lib<br />
$> lib /OUT:lua5.3.5-static.lib *.obj<br />
$> link /OUT:luac.exe luac.o lua5.3.5-static.lib</div><br />
Alternatively, you can put all the instructions in a file named <code>"buildLua5.3.5.bat"</code> and just execute that file:<br />
<br />
<div class="code gray-box">cl /MD /O2 /c /DLUA_BUILD_AS_DLL *.c<br />
ren lua.obj lua.o<br />
ren luac.obj luac.o<br />
link /DLL /IMPLIB:lua5.3.5.lib /OUT:lua5.3.5.dll *.obj<br />
link /OUT:lua.exe lua.o lua5.3.5.lib<br />
lib /OUT:lua5.3.5-static.lib *.obj<br />
link /OUT:luac.exe luac.o lua5.3.5-static.lib</div><br />
<div class="bash">$> buildLua5.3.5.bat</div><br />
<div class="code gray-box">C:\Users\cschmidt\tmp\lua\lua-5.3.5\src>cl /MD /O2 /c /DLUA_BUILD_AS_DLL *.c<br />
Microsoft (R) C/C++ Optimizing Compiler Version 19.21.27702.2 for x86<br />
Copyright (C) Microsoft Corporation. All rights reserved.<br />
<br />
lapi.c<br />
lauxlib.c<br />
lbaselib.c<br />
lbitlib.c<br />
[...]<br />
lua.c<br />
luac.c<br />
lundump.c<br />
lutf8lib.c<br />
lvm.c<br />
lzio.c<br />
Generating Code...<br />
<br />
C:\Users\cschmidt\tmp\lua\lua-5.3.5\src>ren lua.obj lua.o<br />
<br />
C:\Users\cschmidt\tmp\lua\lua-5.3.5\src>ren luac.obj luac.o<br />
<br />
C:\Users\cschmidt\tmp\lua\lua-5.3.5\src>link /DLL /IMPLIB:lua5.3.5.lib /OUT:lua5.3.5.dll *.obj<br />
Microsoft (R) Incremental Linker Version 14.21.27702.2<br />
Copyright (C) Microsoft Corporation. All rights reserved.<br />
<br />
Creating library lua5.3.5.lib and object lua5.3.5.exp<br />
<br />
C:\Users\cschmidt\tmp\lua\lua-5.3.5\src>link /OUT:lua.exe lua.o lua5.3.5.lib<br />
Microsoft (R) Incremental Linker Version 14.21.27702.2<br />
Copyright (C) Microsoft Corporation. All rights reserved.<br />
<br />
<br />
C:\Users\cschmidt\tmp\lua\lua-5.3.5\src>lib /OUT:lua5.3.5-static.lib *.obj<br />
Microsoft (R) Library Manager Version 14.21.27702.2<br />
Copyright (C) Microsoft Corporation. All rights reserved.<br />
<br />
<br />
C:\Users\cschmidt\tmp\lua\lua-5.3.5\src>link /OUT:luac.exe luac.o lua5.3.5-static.lib<br />
Microsoft (R) Incremental Linker Version 14.21.27702.2<br />
Copyright (C) Microsoft Corporation. All rights reserved.<br />
<br />
Creating library luac.lib and object luac.exp<br />
</div><br />
Fine, everything has been built locally in the current folder "src". If we want to use the result of this compilation, we can just copy everything we need to a folder called "lua" in our home directory:<br />
<br />
<div class="bash">$> mkdir %HOMEDRIVE%\%HOMEPATH%\lua<br />
$> copy *.dll %HOMEDRIVE%\%HOMEPATH%\lua\<br />
$> copy *.h %HOMEDRIVE%\%HOMEPATH%\lua\<br />
$> copy *.hpp %HOMEDRIVE%\%HOMEPATH%\lua\<br />
$> copy *.exp %HOMEDRIVE%\%HOMEPATH%\lua\<br />
$> copy *.lib %HOMEDRIVE%\%HOMEPATH%\lua\<br />
$> copy *.exe %HOMEDRIVE%\%HOMEPATH%\lua\</div><br />
Finally, we need to <strong>add that "lua" folder (the location of the Lua interpreter binary) to the "PATH" environment variable in the system settings</strong>.<br />
<br />
<br />
After doing so, we open a new command shell, as described at the beginning of this article, in order to test if everything is working:<br />
<br />
<div class="bash">$> lua</div><br />
<div class="code gray-box">> Lua 5.3.5 Copyright (C) 1994-2018 Lua.org, PUC-Rio</div><br />
To exit the interpreter, just press "CTRL-C" or call <code>os.exit()</code>.<br />
<br />
Fine! Now, we can safely delete the folder <code>"tmp\lua"</code>.<br />
<br />
<h2>References:</h2><ol><li><a href="https://www.lua.org/">Lua homepage</a></li>
<li><a href="https://blog.spreendigital.de/2015/01/16/how-to-compile-lua-5-3-0-for-windows/">How to compile Lua 5.3.0 for Windows</a></li>
</ol>Christian Schmidthttp://www.blogger.com/profile/01625744616539341979noreply@blogger.com1Velburg, Deutschland49.2311863 11.67039769999996749.0653918 11.347674199999966 49.396980799999994 11.993121199999967tag:blogger.com,1999:blog-2525946083367405222.post-29685218010844316532019-06-21T00:10:00.001+02:002019-07-11T15:53:30.877+02:00Linux Mint: Compile the Lua interpreter from source and build your own Debian package to install it<h2>Objective</h2><br />
Compile the latest version of the <a href="https://lua.org/">Lua</a> language interpreter from source. First build a locally installed Lua interpreter for the current user only. Finally, build a Debian package to install it, so that it can be safely removed again when necessary.<br />
<br />
<h2>Motivation</h2><br />
Recently, I thought it was a good idea to learn a new programming language that can be easily integrated and combined with C++. After a quick research I figured that Lua would be very handy for such a job. Unfortunately, my current Mint version does not ship the latest Lua version in its repositories. Additionally, I plan to use Lua on Windows 10, too, so I wanted the installation built from the same sources for both operating systems. This article describes how to get a local, portable, per-user installation of Lua, and how to build a Debian installer package for Linux that can easily be uninstalled again. An upcoming article will describe how to compile your own Lua interpreter on Windows 10 using Microsoft's command-line (CLI) compiler, e.g. from Visual Studio.<br />
<br />
<h2>Prerequisites</h2><br />
<ul><li>Linux Mint 19.1 Tessa - Cinnamon (64 Bit)</li>
<li>GCC 7.4.0 (build essential or <a href="http://itsonlycode.blogspot.com/2015/05/install-multiple-versions-of-gcc-at.html">update-alternatives</a>)</li>
</ul><br />
<h2>Solution</h2><br />
I decided to do the whole build and all temporary work within a <code>"tmp"</code> folder in my home directory.<br />
The build will take place in the sub-directory <code>"lua"</code> within the <code>"tmp"</code> folder.<br />
<br />
<h3>Get the latest source of the Lua interpreter from their homepage</h3><br />
Open a terminal and change the current directory to "~/tmp". Type the following commands to download the latest version of the source code of the Lua interpreter, in my case version 5.3.5.<br />
<br />
<div class="bash">$> mkdir -p ~/tmp/lua<br />
$> cd ~/tmp/lua<br />
$> wget https://www.lua.org/ftp/lua-5.3.5.tar.gz</div><br />
you'll see some output similar to this:<br />
<br />
<div class="code gray-box">--2019-06-19 21:34:32-- https://www.lua.org/ftp/lua-5.3.5.tar.gz<br />
Resolving www.lua.org (www.lua.org)... 88.99.213.221, 2a01:4f8:10a:3edc::2<br />
Connecting to www.lua.org (www.lua.org)|88.99.213.221|:443... connected.<br />
HTTP request sent, awaiting response... 200 OK<br />
Length: 303543 (296K) [application/gzip]<br />
Saving to: ‘lua-5.3.5.tar.gz’<br />
<br />
lua-5.3.5.tar.gz 100%[===============================================================>] 296,43K --.-KB/s in 0,1s <br />
<br />
2019-06-19 21:34:32 (2,56 MB/s) - ‘lua-5.3.5.tar.gz’ saved [303543/303543]</div><br />
Now you have to unpack the downloaded package:<br />
<br />
<div class="bash">$> tar -xzvf lua-5.3.5.tar.gz</div><br />
Everything will be unpacked into the sub-directory <code>"lua-5.3.5"</code>:<br />
<br />
<div class="code gray-box">lua-5.3.5/<br />
lua-5.3.5/Makefile<br />
lua-5.3.5/doc/<br />
lua-5.3.5/doc/luac.1<br />
lua-5.3.5/doc/manual.html<br />
lua-5.3.5/doc/manual.css<br />
lua-5.3.5/doc/contents.html<br />
[...]<br />
lua-5.3.5/doc/readme.html<br />
lua-5.3.5/src/<br />
lua-5.3.5/src/ldblib.c<br />
lua-5.3.5/src/lmathlib.c<br />
lua-5.3.5/src/loslib.c<br />
lua-5.3.5/src/lvm.c<br />
lua-5.3.5/src/ldo.h<br />
lua-5.3.5/src/lua.h<br />
[...]<br />
lua-5.3.5/src/lua.hpp<br />
[...]<br />
lua-5.3.5/README<br />
[...]<br />
</div><br />
Change into the directory <code>"~/tmp/lua/lua-5.3.5"</code> in order to build the interpreter.<br />
<br />
<div class="bash">$> cd ~/tmp/lua/lua-5.3.5</div><br />
<h3>Compile the Lua interpreter from the sources</h3><br />
In order to check how the interpreter can be built and to get more information, we just type "make":<br />
<br />
<div class="bash">$> make</div><br />
<div class="code gray-box">Please do 'make PLATFORM' where PLATFORM is one of these:<br />
aix bsd c89 freebsd generic linux macosx mingw posix solaris<br />
See doc/readme.html for complete instructions.</div><br />
For further explanations, we can check the included documentation in "doc/readme.html".<br />
<br />
Let's compile the sources for the Linux target system.<br />
<br />
<div class="bash">$> make linux</div><br />
<div class="code gray-box">cd src && make linux<br />
make[1]: Entering directory '/home/cschmidt/tmp/lua/lua-5.3.5/src'<br />
make all SYSCFLAGS="-DLUA_USE_LINUX" SYSLIBS="-Wl,-E -ldl -lreadline"<br />
make[2]: Entering directory '/home/cschmidt/tmp/lua/lua-5.3.5/src'<br />
gcc -std=gnu99 -O2 -Wall -Wextra -DLUA_COMPAT_5_2 -DLUA_USE_LINUX -c -o lapi.o lapi.c<br />
[...]<br />
gcc -std=gnu99 -O2 -Wall -Wextra -DLUA_COMPAT_5_2 -DLUA_USE_LINUX -c -o linit.o linit.c<br />
ar rcu liblua.a lapi.o lcode.o lctype.o ldebug.o ldo.o ldump.o lfunc.o lgc.o llex.o lmem.o lobject.o lopcodes.o lparser.o lstate.o lstring.o ltable.o ltm.o lundump.o lvm.o lzio.o lauxlib.o lbaselib.o lbitlib.o lcorolib.o ldblib.o liolib.o lmathlib.o loslib.o lstrlib.o ltablib.o lutf8lib.o loadlib.o linit.o <br />
ar: `u' modifier ignored since `D' is the default (see `U')<br />
ranlib liblua.a<br />
gcc -std=gnu99 -O2 -Wall -Wextra -DLUA_COMPAT_5_2 -DLUA_USE_LINUX -c -o lua.o lua.c<br />
lua.c:82:10: fatal error: readline/readline.h: No such file or directory<br />
#include &lt;readline/readline.h&gt;<br />
^~~~~~~~~~~~~~~~~~~~~<br />
compilation terminated.<br />
&lt;builtin&gt;: recipe for target 'lua.o' failed<br />
make[2]: *** [lua.o] Error 1<br />
make[2]: Leaving directory '/home/cschmidt/tmp/lua/lua-5.3.5/src'<br />
Makefile:110: recipe for target 'linux' failed<br />
make[1]: *** [linux] Error 2<br />
make[1]: Leaving directory '/home/cschmidt/tmp/lua/lua-5.3.5/src'<br />
Makefile:55: recipe for target 'linux' failed<br />
make: *** [linux] Error 2</div><br />
<strong>Oops, that was unexpected!</strong><br />
<br />
However, a quick online search shows that, in order to compile Lua on Linux, the development headers of the "readline" library have to be installed.<br />
Let's quickly resolve that.<br />
<br />
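Before retrying the build you can also check up front whether the readline development headers are present; the path below is where Debian/Ubuntu-based systems such as Mint install them via <code>libreadline-dev</code>:

```shell
# The failing compile step needs readline's C headers at this location.
if [ -e /usr/include/readline/readline.h ]; then
  echo "readline headers present"
else
  echo "readline headers missing - install libreadline-dev"
fi
```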
<div class="bash">$> sudo apt-get install libreadline-dev</div><br />
<div class="code gray-box">Reading package lists... Done<br />
Building dependency tree <br />
Reading state information... Done<br />
The following additional packages will be installed:<br />
libtinfo-dev<br />
Suggested packages:<br />
readline-doc<br />
The following NEW packages will be installed:<br />
libreadline-dev libtinfo-dev<br />
0 upgraded, 2 newly installed, 0 to remove and 0 not upgraded.<br />
Need to get 214 kB of archives.<br />
After this operation, 1.134 kB of additional disk space will be used.<br />
Do you want to continue? [J/n] J<br />
Get:1 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 libtinfo-dev amd64 6.1-1ubuntu1.18.04 [81,3 kB]<br />
Get:2 http://archive.ubuntu.com/ubuntu bionic/main amd64 libreadline-dev amd64 7.0-3 [133 kB]<br />
Fetched 214 kB in 0s (557 kB/s) <br />
Selecting previously unselected package libtinfo-dev:amd64.<br />
(Reading database ... 354931 files and directories currently installed.)<br />
Preparing to unpack .../libtinfo-dev_6.1-1ubuntu1.18.04_amd64.deb ...<br />
Unpacking libtinfo-dev:amd64 (6.1-1ubuntu1.18.04) ...<br />
Selecting previously unselected package libreadline-dev:amd64.<br />
Preparing to unpack .../libreadline-dev_7.0-3_amd64.deb ...<br />
Unpacking libreadline-dev:amd64 (7.0-3) ...<br />
Processing triggers for install-info (6.5.0.dfsg.1-2) ...<br />
Setting up libtinfo-dev:amd64 (6.1-1ubuntu1.18.04) ...<br />
Setting up libreadline-dev:amd64 (7.0-3) ...</div><br />
And try again ...<br />
<br />
<div class="bash">$> make linux</div><br />
<div class="code gray-box">cd src && make linux<br />
make[1]: Entering directory '/home/cschmidt/tmp/lua/lua-5.3.5/src'<br />
make all SYSCFLAGS="-DLUA_USE_LINUX" SYSLIBS="-Wl,-E -ldl -lreadline"<br />
make[2]: Entering directory '/home/cschmidt/tmp/lua/lua-5.3.5/src'<br />
gcc -std=gnu99 -O2 -Wall -Wextra -DLUA_COMPAT_5_2 -DLUA_USE_LINUX -c -o lua.o lua.c<br />
gcc -std=gnu99 -o lua lua.o liblua.a -lm -Wl,-E -ldl -lreadline <br />
gcc -std=gnu99 -O2 -Wall -Wextra -DLUA_COMPAT_5_2 -DLUA_USE_LINUX -c -o luac.o luac.c<br />
gcc -std=gnu99 -o luac luac.o liblua.a -lm -Wl,-E -ldl -lreadline <br />
make[2]: Leaving directory '/home/cschmidt/tmp/lua/lua-5.3.5/src'<br />
make[1]: Leaving directory '/home/cschmidt/tmp/lua/lua-5.3.5/src'</div><br />
Done.<br />
<br />
In order to install everything system-wide, you can now type<br />
<br />
<div class="bash">$> sudo make linux install</div><br />
<font color="red"><strong>But hold on!</strong></font><br />
Do we really want to pollute our system and install things system-wide, without the possibility to safely uninstall everything later?<br />
<br />
What about just creating a local install for the current user instead?<br />
<br />
<div class="bash">$> make local</div><br />
<div class="code gray-box">make install INSTALL_TOP=../install<br />
make[1]: Entering directory '/home/cschmidt/tmp/lua/lua-5.3.5'<br />
cd src && mkdir -p ../install/bin ../install/include ../install/lib ../install/man/man1 ../install/share/lua/5.3 ../install/lib/lua/5.3<br />
cd src && install -p -m 0755 lua luac ../install/bin<br />
cd src && install -p -m 0644 lua.h luaconf.h lualib.h lauxlib.h lua.hpp ../install/include<br />
cd src && install -p -m 0644 liblua.a ../install/lib<br />
cd doc && install -p -m 0644 lua.1 luac.1 ../install/man/man1</div><br />
Let's see what this did:<br />
<br />
<div class="bash">$> ls -la</div><br />
<div class="code gray-box">drwxr-xr-x 5 cschmidt cschmidt 4096 Jun 19 21:43 .<br />
drwxrwxr-x 3 cschmidt cschmidt 4096 Jun 19 21:35 ..<br />
drwxr-xr-x 2 cschmidt cschmidt 4096 Jun 26 2018 doc<br />
drwxrwxr-x 7 cschmidt cschmidt 4096 Jun 19 21:43 <font color="green">install</font><br />
-rw-r--r-- 1 cschmidt cschmidt 3273 Dez 20 2016 Makefile<br />
-rw-r--r-- 1 cschmidt cschmidt 151 Jun 26 2018 README<br />
drwxr-xr-x 2 cschmidt cschmidt 4096 Jun 19 21:40 src</div><br />
<div class="bash">$> ls -la install</div><br />
<div class="code gray-box">drwxrwxr-x 7 cschmidt cschmidt 4096 Jun 19 21:43 .<br />
drwxr-xr-x 57 cschmidt users 4096 Jun 20 20:59 ..<br />
drwxrwxr-x 2 cschmidt cschmidt 4096 Jun 19 21:43 bin<br />
drwxrwxr-x 2 cschmidt cschmidt 4096 Jun 19 21:43 include<br />
drwxrwxr-x 3 cschmidt cschmidt 4096 Jun 19 21:43 lib<br />
drwxrwxr-x 3 cschmidt cschmidt 4096 Jun 19 21:43 man<br />
drwxrwxr-x 3 cschmidt cschmidt 4096 Jun 19 21:43 share</div><br />
Fine, everything has been built locally into a sub-folder "install". If we want to use this version, we can just move it to our home directory and add the location of the Lua interpreter binary to the "PATH" environment variable:<br />
<br />
<div class="bash">$> mv install ~/lua<br />
$> export PATH=$PATH:~/lua/bin</div><br />
In order to add the binary path permanently for every new login, we need to extend the "PATH" variable in our ".bashrc" file, as we already did in some other articles, e.g. <a href="https://itsonlycode.blogspot.com/2018/06/linux-mint-compile-and-install-go.html">Linux Mint: Compile and install the Go compiler from source</a>.<br />
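A minimal sketch of that <code>".bashrc"</code> change (assuming bash is your login shell and that the local install was moved to <code>~/lua</code> as shown above):

```shell
# Append the Lua bin directory to PATH for future shells, skipping the
# append if an identical line is already present.
profile="$HOME/.bashrc"
line='export PATH="$PATH:$HOME/lua/bin"'
grep -qxF "$line" "$profile" 2>/dev/null || echo "$line" >> "$profile"
tail -n 1 "$profile"
```

Open a new terminal (or <code>source ~/.bashrc</code>) for the change to take effect.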
<br />
Anyway, this time we want to build a Debian installer package instead.<br />
<br />
<h3>Build a Debian package to install</h3><br />
As we already know from the article <a href="https://itsonlycode.blogspot.com/2018/06/linux-mint-build-your-own-debian.html">Linux Mint: Build your own Debian package of cmake</a>, we can use "checkinstall":<br />
<br />
<div class="bash">$> sudo checkinstall --install=no</div><br />
<div class="code gray-box">checkinstall 1.6.2, Copyright 2009 Felipe Eduardo Sanchez Diaz Duran<br />
This software is released under the GNU GPL.<br />
<br />
<br />
<br />
*****************************************<br />
**** Debian package creation selected ***<br />
*****************************************</div><br />
The configuration will look something like this:<br />
<br />
<div class="code gray-box">This package will be built according to these values:<br />
<br />
0 - Maintainer: [ cschmidt@gimli ]<br />
1 - Summary: [ Lua 5.3.0 private build ]<br />
2 - Name: [ lua ]<br />
3 - Version: [ 5.3.5 ]<br />
4 - Release: [ 1 ]<br />
5 - License: [ GPL ]<br />
6 - Group: [ checkinstall ]<br />
7 - Architecture: [ amd64 ]<br />
8 - Source location: [ lua-5.3.5 ]<br />
9 - Alternate source location: [ ]<br />
10 - Requires: [ ]<br />
11 - Provides: [ lua ]<br />
12 - Conflicts: [ ]<br />
13 - Replaces: [ ]<br />
<br />
Enter a number to change any of them or press ENTER to continue:</div><br />
so we adjust everything according to our needs (for details see also the article <a href="https://itsonlycode.blogspot.com/2018/06/linux-mint-build-your-own-debian.html">Linux Mint: Build your own Debian package of cmake</a>):<br />
<br />
<div class="code gray-box">This package will be built according to these values: <br />
<br />
0 - Maintainer: [ christianschmidt@hotmail.com ]<br />
1 - Summary: [ Lua 5.3.0 private build ]<br />
2 - Name: [ lua ]<br />
3 - Version: [ 5.3.5 ]<br />
4 - Release: [ 1 ]<br />
5 - License: [ MIT ]<br />
6 - Group: [ checkinstall ]<br />
7 - Architecture: [ amd64 ]<br />
8 - Source location: [ lua-5.3.5 ]<br />
9 - Alternate source location: [ https://www.lua.org/ftp/lua-5.3.5.tar.gz ]<br />
10 - Requires: [ ]<br />
11 - Provides: [ lua ]<br />
12 - Conflicts: [ ]<br />
13 - Replaces: [ ]<br />
<br />
Enter a number to change any of them or press ENTER to continue:</div><br />
Now, we can hit "ENTER" to start the build process.<br />
<br />
<div class="code gray-box">Installing with make install...<br />
<br />
========================= Installation results ===========================<br />
cd src && mkdir -p /usr/local/bin /usr/local/include /usr/local/lib /usr/local/man/man1 /usr/local/share/lua/5.3 /usr/local/lib/lua/5.3<br />
cd src && install -p -m 0755 lua luac /usr/local/bin<br />
cd src && install -p -m 0644 lua.h luaconf.h lualib.h lauxlib.h lua.hpp /usr/local/include<br />
cd src && install -p -m 0644 liblua.a /usr/local/lib<br />
cd doc && install -p -m 0644 lua.1 luac.1 /usr/local/man/man1<br />
<br />
======================== Installation successful ==========================<br />
<br />
Copying documentation directory...<br />
./<br />
./doc/<br />
./doc/lua.css<br />
./doc/contents.html<br />
./doc/manual.css<br />
./doc/luac.1<br />
./doc/index.css<br />
./doc/osi-certified-72x60.png<br />
./doc/readme.html<br />
./doc/lua.1<br />
./doc/logo.gif<br />
./doc/manual.html<br />
./README<br />
<br />
Copying files to the temporary directory...OK<br />
<br />
Stripping ELF binaries and libraries...OK<br />
<br />
Compressing man pages...OK<br />
<br />
Building file list...OK<br />
<br />
Building Debian package...OK<br />
<br />
NOTE: The package will not be installed<br />
<br />
Erasing temporary files...OK<br />
<br />
Deleting temp dir...OK<br />
<br />
<br />
**********************************************************************<br />
<br />
Done. The new package has been saved to<br />
<br />
/home/cschmidt/tmp/lua/lua-5.3.5/lua_5.3.5-1_amd64.deb<br />
You can install it in your system anytime using: <br />
<br />
dpkg -i lua_5.3.5-1_amd64.deb<br />
<br />
**********************************************************************</div><br />
Finished.<br />
<br />
We now have a package "lua_5.3.5-1_amd64.deb" in the folder "~/tmp/lua/lua-5.3.5" that we can install, e.g. by double-clicking it or via <code>sudo dpkg -i lua_5.3.5-1_amd64.deb</code>. Because it is a regular Debian package, it can be removed cleanly at any time with <code>sudo dpkg -r lua</code>.<br />
<br />
<h2>References:</h2><ol><li><a href="https://www.lua.org/">Lua homepage</a></li>
</ol>Christian Schmidthttp://www.blogger.com/profile/01625744616539341979noreply@blogger.com0Regensburg, Deutschland49.0134297 12.10162360000003948.8468242 11.778900100000039 49.180035200000006 12.424347100000039tag:blogger.com,1999:blog-2525946083367405222.post-43378720487147183822018-06-23T19:40:00.001+02:002018-06-23T19:43:51.643+02:00Linux Mint: Compile and install the Go compiler from source<h2>Objective</h2><br />
Set up bootstrapping and build the latest version of the <a href="https://golang.org/">Go</a> compiler, with C bridge (cgo) support, from its sources.<br />
<br />
<h2>Motivation</h2><br />
By coincidence, I stumbled over this interesting <a href="https://www.tutorialspoint.com/go/go_useful_resources.htm">tutorial</a> about the programming language <a href="https://golang.org/">Go</a>.<br />
Since I was always interested in playing around with that language, I took the opportunity to try it out. As I usually want to use the latest compiler version, I thought it would be a good idea not to use the Go installer, but to compile the sources myself from scratch. Unfortunately, the latest Go compiler cannot be compiled with C support unless a Go compiler is already installed on the system. Therefore I first had to install an older Go compiler for bootstrapping. <br />
<br />
<h2>Prerequisites</h2><br />
<ul><li>Linux Mint 18.3 Sylvia - Cinnamon (64 Bit)</li>
<li>GCC 7.3.0 (build essential or <a href="http://itsonlycode.blogspot.com/2015/05/install-multiple-versions-of-gcc-at.html">update-alternatives</a>)</li>
</ul><br />
<h2>Solution</h2><br />
I decided to do the whole build and keep all temporary files within the <code>"Downloads"</code> folder in my home account.<br />
The bootstrap toolchain will reside in the sub-directory <code>"gobootstrap"</code> within the <code>"Downloads"</code> folder.<br />
<br />
<h3>Setup the bootstrap toolchain</h3><br />
Open a terminal. Download and install the latest Go bootstrap toolchain.<br />
<br />
<div class="bash">$> mkdir -p Downloads/gobootstrap<br />
$> cd Downloads/gobootstrap<br />
$> wget https://dl.google.com/go/go1.4-bootstrap-20171003.tar.gz</div><br />
You'll see some output similar to this:<br />
<br />
<div class="code gray-box">--2018-06-23 17:58:49-- https://dl.google.com/go/go1.4-bootstrap-20171003.tar.gz<br />
Resolving dl.google.com (dl.google.com)... 216.58.207.46, 2a00:1450:4001:824::200e<br />
Connecting to dl.google.com (dl.google.com)|216.58.207.46|:443... connected.<br />
HTTP request sent, awaiting response... 200 OK<br />
Length: 11009739 (10M) [application/octet-stream]<br />
Saving to: ‘go1.4-bootstrap-20171003.tar.gz.1’<br />
<br />
go1.4-bootstrap-20171003.tar.gz.1 100%[==========================================================================>] 10,50M 5,23MB/s in 2,0s <br />
<br />
2018-06-23 17:58:51 (5,23 MB/s) - ‘go1.4-bootstrap-20171003.tar.gz.1’ saved [11009739/11009739]</div><br />
Now you have to unpack the downloaded toolchain package:<br />
<br />
<div class="bash">$> tar -xvzf go1.4-bootstrap-20171003.tar.gz</div><br />
Everything is unpacked into a new sub-directory <code>"go"</code>:<br />
<br />
<div class="code gray-box">go/.gitattributes<br />
go/.gitignore<br />
[...]<br />
go/src/cmd/5g/gg.h<br />
go/src/cmd/5g/ggen.c<br />
go/src/cmd/5g/gobj.c<br />
go/src/cmd/5g/gsubr.c<br />
[...]<br />
go/test/varerr.go<br />
go/test/varinit.go<br />
go/test/zerodivide.go</div><br />
Change into the <code>"./go/src"</code> directory and build the bootstrap toolchain.<br />
<strong>Observe: </strong>This step requires a functional GCC compiler to be present on your system.<br />
If not already done: To set-up the GCC 7.3.0 on your system see <a href="http://itsonlycode.blogspot.com/2015/05/install-multiple-versions-of-gcc-at.html">Install multiple versions of GCC on your system</a><br />
<br />
<div class="bash">$> CGO_ENABLED=0 ./make.bash</div><br />
<div class="code gray-box"># Building C bootstrap tool.<br />
cmd/dist<br />
<br />
# Building compilers and Go bootstrap tool for host, linux/amd64.<br />
lib9<br />
[...]<br />
# Building packages and commands for linux/amd64.<br />
runtime<br />
errors<br />
sync/atomic<br />
[...]<br />
cmd/pprof<br />
net/rpc<br />
net/http/fcgi<br />
net/rpc/jsonrpc<br />
</div><br />
Finally, the toolchain build is finished.<br />
<br />
<h3>Compile the Go compiler</h3><br />
Before you compile the compiler, step back to the <code>"Downloads"</code> folder and download the latest sources of the Go compiler.<br />
<br />
<div class="bash">$> cd ~/Downloads<br />
$> wget https://dl.google.com/go/go1.10.3.src.tar.gz</div><br />
<div class="code gray-box">--2018-06-23 18:29:13-- https://dl.google.com/go/go1.10.3.src.tar.gz<br />
Resolving dl.google.com (dl.google.com)... 216.58.207.78, 2a00:1450:4001:825::200e<br />
Connecting to dl.google.com (dl.google.com)|216.58.207.78|:443... connected.<br />
HTTP request sent, awaiting response... 200 OK<br />
Length: 18323736 (17M) [application/octet-stream]<br />
Saving to: ‘go1.10.3.src.tar.gz’<br />
<br />
go1.10.3.src.tar.gz 100%[==========================================================================>] 17,47M 5,94MB/s in 2,9s <br />
<br />
2018-06-23 18:29:16 (5,94 MB/s) - ‘go1.10.3.src.tar.gz’ saved [18323736/18323736]</div><br />
Now unpack the sources, as was done before with the toolchain package.<br />
<br />
<div class="bash">$> tar -xvzf go1.10.3.src.tar.gz</div><br />
<div class="code gray-box">go/<br />
go/AUTHORS<br />
go/CONTRIBUTING.md<br />
[...]<br />
go/src/runtime/closure_test.go<br />
go/src/runtime/compiler.go<br />
go/src/runtime/complex.go<br />
[...]<br />
go/test/varinit.go<br />
go/test/writebarrier.go<br />
go/test/zerodivide.go<br />
</div><br />
Again, step into the <code>"src"</code> directory and build the compiler using the bootstrap toolchain.<br />
This step may take a while, depending on the performance of your computer.<br />
<br />
<div class="bash">$> cd go/src<br />
$> GOROOT_BOOTSTRAP=~/Downloads/gobootstrap/go ./all.bash</div><br />
<div class="code gray-box">Building Go cmd/dist using /home/cschmidt/Downloads/gobootstrap/go.<br />
Building Go toolchain1 using /home/cschmidt/Downloads/gobootstrap/go.<br />
Building Go bootstrap cmd/go (go_bootstrap) using Go toolchain1.<br />
Building Go toolchain2 using go_bootstrap and Go toolchain1.<br />
Building Go toolchain3 using go_bootstrap and Go toolchain2.<br />
Building packages and commands for linux/amd64.<br />
<br />
##### Testing packages.<br />
ok archive/tar 0.051s<br />
ok archive/zip 1.164s<br />
ok bufio 0.186s<br />
ok bytes 0.686s<br />
ok compress/bzip2 0.132s<br />
[...]<br />
ok cmd/vendor/golang.org/x/arch/x86/x86asm 0.213s<br />
ok cmd/vet 3.946s<br />
ok cmd/vet/internal/cfg 0.033s<br />
<br />
##### GOMAXPROCS=2 runtime -cpu=1,2,4 -quick<br />
ok runtime 14.067s<br />
<br />
##### cmd/go terminal test<br />
PASS<br />
ok _/home/cschmidt/Downloads/go/src/cmd/go/testdata/testterminal18153 0.001s<br />
<br />
##### Testing without libgcc.<br />
ok crypto/x509 1.016s<br />
ok net 0.031s<br />
ok os/user 0.038s<br />
<br />
[...]<br />
<br />
##### API check<br />
Go version is "go1.10.3", ignoring -next /home/cschmidt/Downloads/go/api/next.txt<br />
<br />
ALL TESTS PASSED<br />
---<br />
Installed Go for linux/amd64 in /home/cschmidt/Downloads/go<br />
Installed commands in /home/cschmidt/Downloads/go/bin<br />
*** You need to add /home/cschmidt/Downloads/go/bin to your PATH.<br />
</div><br />
As I didn't want the Go compiler installed in my <code>"Downloads"</code> folder, I simply moved it directly into my home account.<br />
<br />
<div class="bash">$> cd ~/Downloads<br />
$> mv go ~/</div><br />
Let's try to call the Go compiler<br />
<br />
<div class="bash">$> go</div><br />
<div class="code gray-box">The program 'go' is currently not installed. You can install it by typing:<br />
sudo apt install golang-go</div><br />
Ouwww! What went wrong? -- <strong>Nothing!</strong> <br />
<br />
I forgot to extend the <code>PATH</code> variable of my environment, as mentioned in the hint given after compilation.<br />
<br />
To do so, I added the following line to my <code>"~/.bashrc"</code>:<br />
<br />
<div class="bash">$> echo 'PATH=$PATH:$HOME/go/bin # Add go compiler' >> ~/.bashrc</div><br />
<strong>Observe: </strong>Double-check that you use single quotes instead of double quotes here; otherwise, the <code>bash</code> will expand the <code>"$PATH"</code> variable immediately and append its current content to your <code>"~/.bashrc"</code>. <br />
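The difference is easy to demonstrate with <code>echo</code>:<br />

```shell
# Double quotes: the shell expands $PATH immediately, freezing its current value
echo "PATH=$PATH:$HOME/go/bin"
# prints something like: PATH=/usr/local/bin:/usr/bin:...:/home/you/go/bin

# Single quotes: the text stays literal and is only expanded
# when ~/.bashrc is sourced at shell startup
echo 'PATH=$PATH:$HOME/go/bin'
# prints exactly: PATH=$PATH:$HOME/go/bin
```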
<br />
Once again:<br />
<div class="bash">$> go</div><br />
<div class="code gray-box">Go is a tool for managing Go source code.<br />
<br />
Usage:<br />
<br />
go command [arguments]<br />
[...]<br />
</div><br />
Yeah, finally done.<br />
The only thing left is to clean up the mess within the <code>"Downloads"</code> directory by deleting everything I do not need anymore.<br />
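In my case that boils down to something like this; a sketch, so double-check the paths before deleting anything:<br />

```shell
# Remove the bootstrap toolchain and the downloaded archive;
# the compiled Go installation now lives in ~/go and is kept
rm -rf ~/Downloads/gobootstrap
rm -f  ~/Downloads/go1.10.3.src.tar.gz
```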
<br />
<h2>References:</h2><ol><li><a href="https://www.tutorialspoint.com/go/go_environment.htm">Go - Environment Setup</a></li>
<li><a href="https://golang.org/doc/install/source">Installing Go from source</a></li>
</ol>Christian Schmidthttp://www.blogger.com/profile/01625744616539341979noreply@blogger.com1Regensburg, Deutschland49.0134297 12.10162360000003948.8468242 11.778900100000039 49.180035200000006 12.424347100000039tag:blogger.com,1999:blog-2525946083367405222.post-5276008645584338852018-06-20T00:30:00.004+02:002018-06-20T00:30:58.383+02:00Linux Mint: Build your own debian package of cmake<h2>Objective</h2><br />
I wanted to use the newest available version of <strong>CMake (version 3.12.0-rc1)</strong> on <strong>Linux Mint 18.3 Sylvia</strong>.<br />
<br />
<h2>Motivation</h2><br />
My company started using CMake as a meta-build-system in combination with Visual Studio 2017 in a brand new software project. Because of this, I had the opportunity to attend a <a href="https://www.eclipseina.com/index.php/Seminar-CMake-deutsch.html">Modern CMake seminar</a> at <a href="https://www.eclipseina.com">Eclipseina GmbH</a>, covering most features of Modern CMake.<br />
As Visual Studio 2017 comes with a CMake-component of version 3.10.0 already, I wanted at least to be able to use the same version of CMake on my Linux Mint 18.3 at home.<br />
Modern CMake requires at least CMake version 3.x. <br />
<br />
Unfortunately, the repository of Linux Mint 18.3 only provides a Debian package for CMake 3.5.1. The homepage of CMake at <a href="https://cmake.org/">cmake.org</a> only offers an install script, without an uninstaller. I don't want to pollute my system with early-access versions of software packages that I cannot cleanly uninstall later.<br />
Instead, I wanted to be able to install and uninstall any version of CMake. Therefore I needed to build my own Debian install package (*.deb) for CMake.<br />
<br />
<h2>Prerequisites</h2><br />
<ul><li>Linux Mint 18.3 Sylvia - Cinnamon (64 Bit)</li>
<li>GCC 7.3.0 (build essential or <a href="http://itsonlycode.blogspot.com/2015/05/install-multiple-versions-of-gcc-at.html">update-alternatives</a>)</li>
</ul><br />
<h2>Solution</h2><br />
<h3>Install CMake locally</h3><br />
Open a terminal. Download the CMake installer script:<br />
<br />
<div class="bash">$> mkdir Downloads<br />
$> cd Downloads<br />
$> wget https://cmake.org/files/v3.12/cmake-3.12.0-rc1-Linux-x86_64.sh</div><br />
You'll see some output like this:<br />
<br />
<div class="code gray-box">--2018-06-19 22:58:04-- https://cmake.org/files/v3.12/cmake-3.12.0-rc1-Linux-x86_64.sh<br />
Resolving cmake.org (cmake.org)... 66.194.253.19<br />
Connecting to cmake.org (cmake.org)|66.194.253.19|:443... connected.<br />
HTTP request sent, awaiting response... 200 OK<br />
Length: 30260259 (29M) [text/x-sh]<br />
Saving to: ‘cmake-3.12.0-rc1-Linux-x86_64.sh.1’<br />
<br />
cmake-3.12.0-rc1-Linux-x86_64.sh 100%[==========================================================>] 28,86M 6,02MB/s in 5,3s <br />
<br />
2018-06-19 22:58:10 (5,41 MB/s) - ‘cmake-3.12.0-rc1-Linux-x86_64.sh.1’ saved [30260259/30260259]</div><br />
Now, set the executable flag for the downloaded script and start the temporary local install as a normal user:<br />
<br />
<div class="bash">$> chmod u+x cmake-3.12.0-rc1-Linux-x86_64.sh<br />
$> ./cmake-3.12.0-rc1-Linux-x86_64.sh</div><br />
<div class="code gray-box">CMake Installer Version: 3.12.0-rc1, Copyright (c) Kitware<br />
This is a self-extracting archive.<br />
The archive will be extracted to: /home/cschmidt/Downloads<br />
<br />
If you want to stop extracting, please press <ctrl-c>.<br />
CMake - Cross Platform Makefile Generator<br />
Copyright 2000-2018 Kitware, Inc. and Contributors<br />
All rights reserved.<br />
<br />
[...]<br />
Do you accept the license? [yN]:<br />
</ctrl-c></div><br />
Accept the license by typing <code>'y'</code>.<br />
<br />
<div class="code gray-box">By default the CMake will be installed in:<br />
"/home/cschmidt/Downloads/cmake-3.12.0-rc1-Linux-x86_64"<br />
Do you want to include the subdirectory cmake-3.12.0-rc1-Linux-x86_64?<br />
Saying no will install in: "/home/cschmidt/Downloads" [Yn]: </div><br />
Accept the default path by typing <code>'Y'</code>.<br />
<br />
<div class="code gray-box">Using target directory: /home/cschmidt/Downloads/cmake-3.12.0-rc1-Linux-x86_64<br />
Extracting, please wait...<br />
<br />
Unpacking finished successfully</div><br />
To be able to use the locally installed CMake, you must add its binary directory to your environment path:<br />
(Of course, use the path from above that reflects your install directory, with a <code>"/bin"</code> component appended)<br />
<br />
<div class="bash">$> PATH=$PATH:/home/cschmidt/Downloads/cmake-3.12.0-rc1-Linux-x86_64/bin</div><br />
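To verify that the shell now picks up the locally installed binary, you can ask for its location and version; the exact path is, of course, specific to my machine:<br />

```shell
# The appended PATH entry makes the local cmake visible to the shell
export PATH="$PATH:$HOME/Downloads/cmake-3.12.0-rc1-Linux-x86_64/bin"
type -p cmake      # should point into the directory added above,
                   # unless another cmake earlier in PATH shadows it
cmake --version    # should report 3.12.0-rc1
```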
<h3>Download and extract the CMake source package</h3><br />
<div class="bash">$> wget https://cmake.org/files/v3.12/cmake-3.12.0-rc1.tar.gz</div><br />
<div class="code gray-box">--2018-06-19 23:15:38-- https://cmake.org/files/v3.12/cmake-3.12.0-rc1.tar.gz<br />
Resolving cmake.org (cmake.org)... 66.194.253.19<br />
Connecting to cmake.org (cmake.org)|66.194.253.19|:443... connected.<br />
HTTP request sent, awaiting response... 200 OK<br />
Length: 8089283 (7,7M) [application/x-gzip]<br />
Saving to: ‘cmake-3.12.0-rc1.tar.gz.1’<br />
<br />
cmake-3.12.0-rc1.tar.gz.1 100%[==========================================================>] 7,71M 3,46MB/s in 2,2s <br />
<br />
2018-06-19 23:15:42 (3,46 MB/s) - ‘cmake-3.12.0-rc1.tar.gz.1’ saved [8089283/8089283]</div><br />
Extract the source tar-gz package:<br />
<br />
<div class="bash">$> tar -xvzf cmake-3.12.0-rc1.tar.gz</div><div class="code gray-box">cmake-3.12.0-rc1/.clang-format<br />
cmake-3.12.0-rc1/.clang-tidy<br />
cmake-3.12.0-rc1/Auxiliary/<br />
cmake-3.12.0-rc1/Auxiliary/bash-completion/<br />
cmake-3.12.0-rc1/Auxiliary/bash-completion/cmake<br />
cmake-3.12.0-rc1/Auxiliary/bash-completion/CMakeLists.txt<br />
[...]<br />
cmake-3.12.0-rc1/Utilities/Sphinx/static/cmake-favicon.ico<br />
cmake-3.12.0-rc1/Utilities/Sphinx/static/cmake-logo-16.png<br />
cmake-3.12.0-rc1/Utilities/Sphinx/static/cmake.css<br />
cmake-3.12.0-rc1/Utilities/Sphinx/templates/<br />
cmake-3.12.0-rc1/Utilities/Sphinx/templates/layout.html</div><br />
<h3>Compile CMake from source using your temporary CMake installation</h3><br />
<div class="bash">$> cd cmake-3.12.0-rc1<br />
$> mkdir build<br />
$> cd build/<br />
$> cmake ..<br />
$> make</div><br />
CMake first checks your system and generates the build files; <code>make</code> then builds the binaries from source, which takes a while ...<br />
<br />
<div class="code gray-box">-- The C compiler identification is GNU 7.3.0<br />
-- The CXX compiler identification is GNU 7.3.0<br />
-- Check for working C compiler: /usr/bin/cc<br />
-- Check for working C compiler: /usr/bin/cc -- works<br />
[...]<br />
-- Performing Test run_inlines_hidden_test<br />
-- Performing Test run_inlines_hidden_test - Success<br />
-- Configuring done<br />
-- Generating done<br />
-- Build files have been written to: /home/cschmidt/Downloads/cmake-3.12.0-rc1/build<br />
cschmidt@gimli:~/Downloads/cmake-3.12.0-rc1/build$ make<br />
Scanning dependencies of target cmsys_c<br />
[ 0%] Building C object Source/kwsys/CMakeFiles/cmsys_c.dir/ProcessUNIX.c.o<br />
[ 0%] Building C object Source/kwsys/CMakeFiles/cmsys_c.dir/Base64.c.o<br />
[...]<br />
[ 1%] Building C object Source/kwsys/CMakeFiles/cmsys_c.dir/String.c.o<br />
[ 1%] Linking C static library libcmsys_c.a<br />
[...]<br />
Scanning dependencies of target foo<br />
[100%] Building CXX object Tests/FindPackageModeMakefileTest/CMakeFiles/foo.dir/foo.cpp.o<br />
[100%] Linking CXX static library libfoo.a<br />
[100%] Built target foo</div><br />
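A side note: if your machine has several CPU cores, you can speed up the <code>make</code> step considerably by running build jobs in parallel:<br />

```shell
# Run one make job per available CPU core
make -j"$(nproc)"
```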
<h3>Build the Debian package (*.deb)</h3><br />
If <code>checkinstall</code> is not installed on your machine, you can install it via:<br />
<br />
<div class="bash">$> sudo apt-get install checkinstall</div><br />
On mine, it's already available, therefore...<br />
<br />
<div class="code gray-box">Reading package lists... Done<br />
Building dependency tree <br />
Reading state information... Done<br />
checkinstall is already the newest version (1.6.2-4ubuntu1).<br />
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.</div><br />
Normally <code>checkinstall</code> needs to be run as <code>root</code> and not only builds the package, but also installs the software.<br />
To just build the package, without <code>root</code> privileges and without automatically installing it, we have to run <code>checkinstall</code> using <code>fakeroot</code>.<br />
<br />
<div class="bash">$> fakeroot checkinstall --install=no --fstrans=yes</div><br />
<div class="code gray-box">checkinstall 1.6.2, Copyright 2009 Felipe Eduardo Sanchez Diaz Duran<br />
This software is released under the GNU GPL.<br />
<br />
<br />
The package documentation directory ./doc-pak does not exist. <br />
Should I create a default set of package docs? [y]: y</div><br />
Confirm the question with <code>'y'</code>.<br />
<br />
<div class="code gray-box">Preparing package documentation...OK<br />
<br />
*** No known documentation files were found. The new package <br />
*** won't include a documentation directory.<br />
<br />
*****************************************<br />
**** Debian package creation selected ***<br />
*****************************************<br />
<br />
This package will be built according to these values: <br />
<br />
0 - Maintainer: [ cschmidt@gimli ]<br />
1 - Summary: [ CMake Release Candidate (3.12.0-rc1) ]<br />
2 - Name: [ build ]<br />
3 - Version: [ 20180618 ]<br />
4 - Release: [ 1 ]<br />
5 - License: [ GPL ]<br />
6 - Group: [ checkinstall ]<br />
7 - Architecture: [ amd64 ]<br />
8 - Source location: [ build ]<br />
9 - Alternate source location: [ ]<br />
10 - Requires: [ ]<br />
11 - Provides: [ build ]<br />
12 - Conflicts: [ ]<br />
13 - Replaces: [ ]</div><br />
Now you have the opportunity to change some meta-data, e.g. name and URL: <br />
<br />
<div class="bash">Enter a number to change any of them or press ENTER to continue: 0<br />
Enter the maintainer's name and e-mail address: <br />
>> cwschmidt<br />
<br />
Enter a number to change any of them or press ENTER to continue: 9<br />
Enter the alternate source location: <br />
>> https://cmake.org/files/v3.12/cmake-3.12.0-rc1.tar.gz</div><br />
<div class="code gray-box">This package will be built according to these values: <br />
<br />
0 - Maintainer: [ cwschmidt ]<br />
1 - Summary: [ CMake Release Candidate (3.12.0-rc1) ]<br />
2 - Name: [ build ]<br />
3 - Version: [ 20180618 ]<br />
4 - Release: [ 1 ]<br />
5 - License: [ GPL ]<br />
6 - Group: [ checkinstall ]<br />
7 - Architecture: [ amd64 ]<br />
8 - Source location: [ build ]<br />
9 - Alternate source location: [ https://cmake.org/files/v3.12/cmake-3.12.0-rc1.tar.gz ]<br />
10 - Requires: [ ]<br />
11 - Provides: [ build ]<br />
12 - Conflicts: [ ]<br />
13 - Replaces: [ ]<br />
<br />
Enter a number to change any of them or press ENTER to continue:</div><br />
Finally, press Enter to continue<br />
<br />
<div class="code gray-box">Installing with make install...<br />
<br />
========================= Installation results ===========================<br />
[ 1%] Built target cmsys_c<br />
[ 2%] Built target cmsysTestsC<br />
[ 4%] Built target cmsys<br />
[...]<br />
[100%] Built target pseudo_tidy<br />
[100%] Built target pseudo_cppcheck<br />
[100%] Built target foo<br />
Install the project...<br />
-- Install configuration: ""<br />
-- Installing: /usr/local/doc/cmake-3.12/Copyright.txt<br />
-- Installing: /usr/local/share/cmake-3.12/Help<br />
-- Installing: /usr/local/share/cmake-3.12/Help/prop_dir<br />
-- Installing: /usr/local/share/cmake-3.12/Help/prop_dir/VS_GLOBAL_SECTION_PRE_section.rst<br />
[...]<br />
-- Installing: /usr/local/share/cmake-3.12/Modules<br />
-- Installing: /usr/local/share/cmake-3.12/Modules/FindCurses.cmake<br />
-- Installing: /usr/local/share/cmake-3.12/Modules/FindWget.cmake<br />
-- Installing: /usr/local/share/cmake-3.12/Modules/FindAVIFile.cmake<br />
[...]<br />
-- Installing: /usr/local/bin/cmake<br />
-- Installing: /usr/local/bin/ctest<br />
-- Installing: /usr/local/bin/cpack<br />
[...]<br />
-- Installing: /usr/local/share/cmake-3.12/completions/cmake<br />
-- Installing: /usr/local/share/cmake-3.12/completions/cpack<br />
-- Installing: /usr/local/share/cmake-3.12/completions/ctest<br />
<br />
======================== Installation successful ==========================<br />
<br />
Some of the files created by the installation are inside the home directory: /home<br />
<br />
You probably don't want them to be included in the package.<br />
Do you want me to list them? [n]: y<br />
Should I exclude them from the package? (Saying yes is a good idea) [n]: y</div><br />
You are asked whether to exclude the files that were placed in your home directory.<br />
To inspect the list, answer the first question with <code>'y'</code>.<br />
To exclude the files from the package, answer the second question with <code>'y'</code> as well.<br />
<br />
<div class="code gray-box">Copying files to the temporary directory...OK<br />
<br />
Stripping ELF binaries and libraries...OK<br />
<br />
Compressing man pages...OK<br />
<br />
Building file list...OK<br />
<br />
Building Debian package...OK<br />
<br />
NOTE: The package will not be installed<br />
<br />
Erasing temporary files...OK<br />
<br />
Writing backup package...OK<br />
OK<br />
<br />
Deleting temp dir...OK<br />
<br />
<br />
**********************************************************************<br />
<br />
Done. The new package has been saved to<br />
<br />
/home/cschmidt/Downloads/cmake-3.12.0-rc1/build/build_20180618-1_amd64.deb<br />
You can install it in your system anytime using: <br />
<br />
dpkg -i build_20180618-1_amd64.deb<br />
<br />
**********************************************************************</div><br />
Finished. You can install the newly created package by typing<br />
<br />
<div class="bash">$> sudo dpkg -i build_20180618-1_amd64.deb</div><br />
or via your Debian package manager by double-clicking the file "build_20180618-1_amd64.deb". <br />
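A side note: as seen above, <code>checkinstall</code> derived the package name "build" and the version "20180618" from the build directory and the date. According to its man page, the metadata can also be set non-interactively via command-line flags; a sketch (the chosen version and release strings are my own and may need tweaking to satisfy Debian's version syntax):<br />

```shell
# Pre-set the package metadata instead of editing it interactively
fakeroot checkinstall --install=no --fstrans=yes \
    --pkgname=cmake --pkgversion=3.12.0 --pkgrelease=rc1
```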
<br />
<h2>References:</h2><ol><li><a href="https://cmake.org/">cmake.org</a></li>
<li><a href="https://askubuntu.com/questions/355565/how-do-i-install-the-latest-version-of-cmake-from-the-command-line">How do I install the latest version of cmake from the command line?</a></li>
<li><a href="https://stackoverflow.com/questions/47052218/checkinstall-source-code-inside-home-directory">checkinstall source code inside home directory<br />
</a></li>
</ol>Christian Schmidthttp://www.blogger.com/profile/01625744616539341979noreply@blogger.com0Regensburg, Deutschland49.0134297 12.10162360000003948.8468242 11.778900100000039 49.180035200000006 12.424347100000039tag:blogger.com,1999:blog-2525946083367405222.post-56635255647668168092017-07-28T23:24:00.001+02:002017-07-29T16:09:26.462+02:00Linux Mint: Mount your iPhone like an external drive to transfer photos and videos<h2>Objective</h2><br />
I want to mount my <strong>"iPhone 5s"</strong> like any external disk-drive on my <strong>Linux Mint 18.2 "Sonya"</strong> to access my photos and videos. The out-of-the-box solution stopped working since my upgrade to iOS 10.3.<br />
<br />
<h2>Motivation</h2><br />
Since iOS 8, I was used to installing <code>libimobiledevice</code> with my package manager (usually <code>synaptic</code> or <code>"apt-get install libimobiledevice"</code>) from the Mint repository, to have access to my "iPhone 5s".<br />
Until now, this was a very convenient way to exchange photos and videos between my iPhone and my laptop running Linux Mint. Recently I updated my Linux installation to Linux Mint 18.2 "Sonya" and my iPhone to iOS 10.3.3. After that, I recognized that <code>libimobiledevice</code> didn't work reliably anymore.<br />
At first, I couldn't really find out whether the newer version of Linux Mint or the newer version of iOS was responsible for the decline in service. After a while of reading posts on the subject on the internet, I strongly suspect that the main reason was the upgrade to iOS 10.2 and later 10.3. With iOS 10.2 I could connect my phone only sporadically, and mostly just to see the "Documents" folder mounted, but not the "Photo" folder. Rarely, the "Photo" folder appeared, too. Even when I was lucky and it was mounted, my phone never asked me whether I trust the connection to the computer.<br />
However, without the confirmation of this question (which didn't even appear), the "Photo" folder was always displayed as empty. Bummer!<br />
<br />
<h2>Prerequisites</h2><br />
<ul><li>Linux Mint 18.2 Sonya</li>
<li>iPhone 5s with iOS 10.3.3</li>
</ul><br />
<h2>Solution: Compiling most of the sources yourself</h2><br />
On the internet, I found a manual that promised to make the connection between an "iPhone" and Linux Mint work again [1]. This manual was originally written for users of Ubuntu. With Linux Mint, it mostly worked as described, but was somewhat incomplete.<br />
<br />
After you follow this description, the tools to mount and unmount your phone will be installed in the home account of the current user; only one library, <code>usbmuxd</code>, must be installed as root in the system, otherwise mounting would not work.<br />
<br />
<h3>Install necessary software for building the source packages</h3><br />
To check-out and compile the needed packages from source, you have to install some additional software first.<br />
<br />
Therefore, open a <code>bash-command-shell</code> and install <code>git</code> to be able to check-out the source code repository to be compiled.<br />
<br />
<div class="bash">$> sudo apt-get install -y git</div><br />
Then install the compiler suite via the meta-package <code>build-essential</code>, including <code>gcc</code> and such...<br />
<br />
<div class="bash">$> sudo apt-get install -y build-essential</div><br />
Contrary to the original manual at [1], I had to install some additional build tools. <br />
<br />
<div class="bash">$> sudo apt-get install -y libtool m4 automake</div><br />
I also needed the package <code>libfuse-dev</code> from the Mint repository; this may not be necessary on Ubuntu.<br />
<br />
<div class="bash">$> sudo apt-get install -y libfuse-dev</div><br />
<h3>Setup the shell environment to build the software</h3><br />
If you don't want to install the new commands directly into your system (which would additionally require <code>sudo</code> for <strong>all</strong> <code>"make install"</code> commands, and is not recommended), you have to set up your shell environment.<br />
<br />
For this tutorial, all new commands to mount and unmount the file-system of your iPhone will be installed in the sub-directory <code>"${HOME}/usr/bin/"</code>. <br />
<br />
Create the sub-directory to store the source files of the packages to be compiled:<br />
<br />
<div class="bash">$> mkdir -p "$HOME/usr/src"</div><br />
Set all required environment variables to ensure to build the packages from source as desired:<br />
<br />
<div class="bash">$> export PKG_CONFIG_PATH="${HOME}/usr/lib/pkgconfig:${PKG_CONFIG_PATH}"<br />
$> export CPATH="${HOME}/usr/include:${CPATH}"<br />
$> export MANPATH="${HOME}/usr/share/man:${MANPATH}"<br />
$> export PATH="${HOME}/usr/bin:${PATH}"<br />
$> export LD_LIBRARY_PATH="${HOME}/usr/lib:${LD_LIBRARY_PATH}"</div><br />
<h3>Make the path to your new tools permanent</h3><br />
It is recommended to put the last two export statements into your <code>.bashrc</code>, so they are loaded every time you open a new command shell; otherwise you must type<br />
<br />
<div class="bash">$> export PATH="${HOME}/usr/bin:${PATH}"<br />
$> export LD_LIBRARY_PATH="${HOME}/usr/lib:${LD_LIBRARY_PATH}"</div><br />
in every newly opened command-shell to mount and unmount the file-system of your iPhone: the first export is needed to find the new commands, the second to load the correct run-time libraries for them.<br />
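Appending the two lines can be done in one go, for example like this (note the single quotes: they keep the variables unexpanded, so they are evaluated at shell startup, not now):<br />

```shell
# Append the two exports literally to ~/.bashrc
echo 'export PATH="${HOME}/usr/bin:${PATH}"' >> ~/.bashrc
echo 'export LD_LIBRARY_PATH="${HOME}/usr/lib:${LD_LIBRARY_PATH}"' >> ~/.bashrc
```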
<br />
<h3>Clone all needed repositories from Github</h3><br />
<div class="bash">$> cd ~/usr/src<br />
$> for x in libplist libusbmuxd usbmuxd libimobiledevice ifuse; do git clone https://github.com/libimobiledevice/${x}.git;done</div><br />
You should see something similar to the following output:<br />
<br />
<div class="code gray-box">Cloning into 'libplist'...<br />
remote: Counting objects: 3767, done.<br />
remote: Total 3767 (delta 0), reused 0 (delta 0), pack-reused 3767<br />
Receiving objects: 100% (3767/3767), 1.13 MiB | 727.00 KiB/s, done.<br />
Resolving deltas: 100% (2304/2304), done.<br />
Checking connectivity... done.<br />
Cloning into 'libusbmuxd'...<br />
remote: Counting objects: 382, done.<br />
remote: Total 382 (delta 0), reused 0 (delta 0), pack-reused 382<br />
Receiving objects: 100% (382/382), 123.94 KiB | 0 bytes/s, done.<br />
Resolving deltas: 100% (209/209), done.<br />
Checking connectivity... done.<br />
Cloning into 'usbmuxd'...<br />
remote: Counting objects: 1954, done.<br />
remote: Compressing objects: 100% (5/5), done.<br />
remote: Total 1954 (delta 0), reused 1 (delta 0), pack-reused 1949<br />
Receiving objects: 100% (1954/1954), 604.44 KiB | 424.00 KiB/s, done.<br />
Resolving deltas: 100% (1191/1191), done.<br />
Checking connectivity... done.<br />
Cloning into 'libimobiledevice'...<br />
remote: Counting objects: 8095, done.<br />
remote: Total 8095 (delta 0), reused 0 (delta 0), pack-reused 8095<br />
Receiving objects: 100% (8095/8095), 2.47 MiB | 617.00 KiB/s, done.<br />
Resolving deltas: 100% (5666/5666), done.<br />
Checking connectivity... done.<br />
Cloning into 'ifuse'...<br />
remote: Counting objects: 499, done.<br />
remote: Total 499 (delta 0), reused 0 (delta 0), pack-reused 499<br />
Receiving objects: 100% (499/499), 92.37 KiB | 0 bytes/s, done.<br />
Resolving deltas: 100% (242/242), done.<br />
Checking connectivity... done.<br />
</div><br />
In addition to the original manual [1], I also had to compile <code>libplist</code> from source.<br />
<br />
<h3>Build and install the packages in the following order</h3><br />
<h4>Build libplist</h4><br />
<div class="bash">$> cd ~/usr/src/libplist<br />
$> ./autogen.sh --prefix="$HOME/usr"<br />
$> make && make install</div><br />
<h4>Build libusbmuxd</h4><br />
<div class="bash">$> cd ~/usr/src/libusbmuxd<br />
$> ./autogen.sh --prefix="$HOME/usr"<br />
$> make && make install</div><br />
<h4>Build libimobiledevice</h4><br />
<div class="bash">$> cd ~/usr/src/libimobiledevice<br />
$> ./autogen.sh --prefix="$HOME/usr"<br />
$> make && make install</div><br />
<h4>Build usbmuxd</h4><br />
The package <code>usbmuxd</code> must be installed with administrative rights, because it needs write access to <code>"/lib/udev/rules.d"</code> and <code>"/lib/systemd/system"</code>.<br />
<br />
<div class="bash">$> cd ~/usr/src/usbmuxd<br />
$> ./autogen.sh --prefix="$HOME/usr"<br />
$> make && <strong>sudo</strong> make install</div><br />
<h4>Build ifuse</h4><br />
<div class="bash">$> cd ~/usr/src/ifuse<br />
$> ./autogen.sh --prefix="$HOME/usr"<br />
$> make && make install</div><br />
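For reference, the five builds above could also be scripted in one loop; a sketch, assuming the repositories were cloned into <code>~/usr/src</code> as described (only <code>usbmuxd</code> needs <code>sudo</code> for its install step):<br />

```shell
# Build and install the packages in the order given above
for pkg in libplist libusbmuxd libimobiledevice usbmuxd ifuse; do
    cd ~/usr/src/"$pkg"
    ./autogen.sh --prefix="$HOME/usr"
    make
    if [ "$pkg" = usbmuxd ]; then
        sudo make install   # needs write access to /lib/udev and /lib/systemd
    else
        make install
    fi
done
```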
<h2>Test if everything works</h2><br />
It's assumed that you put the two exports into your <code>~/.bashrc</code> as mentioned above.<br />
Open a new bash command-shell.<br />
<br />
<h3>Connect your iPhone</h3><br />
Create a mount point, where you want the content of your iPhone to appear.<br />
<br />
<div class="bash">$> mkdir -p ~/usr/mnt</div><br />
Check which executable will actually be used, in case you also have <code>libimobiledevice</code> installed from the Mint repository, to avoid confusion.<br />
<br />
<div class="bash">$> type -p ifuse</div><br />
<div class="code gray-box">/home/csch/usr/bin/ifuse</div><br />
<div class="bash">$> type -p idevicepair</div><br />
<div class="code gray-box">/home/csch/usr/bin/idevicepair</div><br />
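The same check can be wrapped in a tiny helper that warns when a tool resolves to something other than the expected prefix. This is a sketch: <code>check_origin</code> is a made-up name, and <code>command -v</code> is used as the portable cousin of <code>type -p</code>:

```shell
# Warn when a tool on $PATH does not come from the expected directory,
# e.g. $HOME/usr/bin from the setup above.  check_origin is a made-up
# helper name; command -v is the portable cousin of "type -p".
check_origin() {  # $1 = tool name, $2 = expected directory
    p="$(command -v "$1")" || { echo "$1: not found"; return 1; }
    case "$p" in
        "$2"/*) echo "$1: OK ($p)" ;;
        *)      echo "$1: WARNING, resolves to $p" ;;
    esac
}

check_origin ifuse       "$HOME/usr/bin" || true
check_origin idevicepair "$HOME/usr/bin" || true
```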
<h3>Pair your iPhone with your computer<br />
</h3><br />
Now, grab your lightning-usb-cable and connect your iPhone to the computer.<br />
Try to pair the iPhone with your computer.<br />
<br />
<div class="bash">$> idevicepair pair</div><br />
<div class="code gray-box">ERROR: Could not validate with device 45ad6a77ae03f2d03f14a68fae178e45e70e7a04 because a passcode is set. Please enter the passcode on the device and retry.</div><br />
Oops, what's that? What happened? Try again...<br />
<br />
<div class="bash">$> idevicepair pair</div><br />
No worries ... the error just tells you that you forgot to confirm on your phone that you trust the connected computer, by entering your PIN and accepting the trust dialog.<br />
<br />
<div class="code gray-box">cschmidt@pippin:~/usr/src/ifuse$ idevicepair pair<br />
ERROR: Please accept the trust dialog on the screen of device 45ad6a77ae03f2d03f14a68fae178e45e70e7a04, then attempt to pair again.</div><br />
After doing so, you can finally pair your iPhone (all good things come in threes, so here we go again)<br />
<br />
<div class="bash">$> idevicepair pair</div><br />
<div class="code gray-box">SUCCESS: Paired with device 45ad6a77ae03f2d03f14a68fae178e45e70e7a04</div><br />
<h3>Mount the file-system of your iPhone and check the content<br />
</h3><br />
Finally, mount the file-system of your phone.<br />
<br />
<div class="bash">$> ifuse ~/usr/mnt/<br />
$> ls ~/usr/mnt/</div><br />
<div class="code gray-box">AirFair com.apple.itunes.lock_sync iTunes_Control Photos Radio<br />
Books DCIM MediaAnalysis PublicStaging Recordings<br />
CloudAssets Downloads PhotoData Purchases Safari<br />
</div><br />
<h3>Unmount and disconnect<br />
</h3><br />
To safely disconnect your iPhone, you have to unmount the file-system in <code>~/usr/mnt</code> first with <code>fusermount</code>.<br />
<br />
<div class="bash">$> fusermount -u ~/usr/mnt</div><br />
Now, you can unplug your iPhone again.<br />
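The whole mount/use/unmount cycle can be bundled into a small wrapper function. This is a sketch: <code>with_iphone</code> is a made-up name, and the <code>MOUNT</code>/<code>UMOUNT</code> variables exist only so the commands can be overridden; by default they are <code>ifuse</code> and <code>fusermount -u</code> exactly as above:

```shell
# Sketch of a mount/run/unmount wrapper around ifuse and fusermount.
# MOUNT and UMOUNT default to the real commands but can be overridden.
MOUNT="${MOUNT:-ifuse}"
UMOUNT="${UMOUNT:-fusermount -u}"

with_iphone() {  # $1 = mount point, rest = command to run on it
    mnt="$1"; shift
    mkdir -p "$mnt" || return 1
    $MOUNT "$mnt" || return 1       # e.g. ifuse ~/usr/mnt
    "$@" "$mnt"                     # e.g. ls ~/usr/mnt
    rc=$?
    $UMOUNT "$mnt"                  # always unmount again
    return $rc
}
```

Called as <code>with_iphone ~/usr/mnt ls</code>, it mounts the phone, lists the content and unmounts again, even if the command in between failed.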
<br />
<h2>References:</h2><ol><li><a href="https://gist.github.com/samrocketman/70dff6ebb18004fc37dc5e33c259a0fc">gist: samrocketman/libimobiledevice_ifuse_Ubuntu.md</a></li>
<li><a href="https://github.com/libimobiledevice">Github repository <code>https://github.com/libimobiledevice/</code></a></li>
<li><a href="https://bash.cyberciti.biz/guide/Type_command"><code>type</code> command reference</a></li>
</ol><br />
<br />
Christian Schmidthttp://www.blogger.com/profile/01625744616539341979noreply@blogger.com21Regensburg, Deutschland49.0134297 12.10162360000003948.8468242 11.778900100000039 49.180035200000006 12.424347100000039tag:blogger.com,1999:blog-2525946083367405222.post-45195644905323089252016-12-04T00:26:00.001+01:002017-07-28T23:30:13.770+02:00 Blogger: Host "SyntaxHighlighter" on GitHub-Pages<h2>Objective</h2><br />
I would like to host my "<a href="http://alexgorbatchev.com/SyntaxHighlighter/">SyntaxHighlighter</a>" on Github-Pages to make it easy to format the source-code on my blog on Blogger. This is needed because the original hosting via Google-Drive does not work anymore.<br />
<br />
<h2>Motivation</h2><br />
In a comment, a reader of my blog made me aware that the syntax highlighting for source-code stopped working a while ago. So I investigated the issue and found out that Google unfortunately stopped web-hosting of pages via Google-Drive. <br />
<br />
Google deprecated web-hosting support in Google-Drive as of 31 August 2015 (Reference: https://gsuiteupdates.googleblog.com/2015/08/deprecating-web-hosting-support-in.html).<br />
However, the web-hosting via Google-Drive only stopped working a year later, as of 31 August 2016.<br />
<br />
I wanted to move neither to <a href="https://domains.google/">Google Domains</a> nor to the <a href="https://cloud.google.com/storage/">Google Cloud Platform</a>, because both services are not free, so I decided to give <a href="https://pages.github.com/">Github-Pages</a> a try.<br />
<br />
<h2>Prerequisites</h2><br />
<ul><li>Github Account</li>
<li>git command-line tool</li>
<li><a href="http://itsonlycode.blogspot.de/2015/06/blogger-setup-syntaxhighlighter-for.html">Blogger: Setup "SyntaxHighlighter" for your blog</a></li>
</ul><br />
<h2>Setup your Hosting of SyntaxHighlighter on Github-Pages</h2><br />
<h3>Prepare your local repository</h3><br />
Follow the instructions in <a href="http://itsonlycode.blogspot.de/2015/06/blogger-setup-syntaxhighlighter-for.html">Blogger: Setup "SyntaxHighlighter" for your blog</a> to get the sources for SyntaxHighlighter.<br />
<br />
Create a directory for your new repository named e.g. <code>syntaxhighlighter-pages/docs</code>:<br />
<div class="bash">$> mkdir -p syntaxhighlighter-pages/docs<br />
$> cd syntaxhighlighter-pages<br />
$> git init</div><br />
<div class="code gray-box">Initialized empty Git repository in /home/cschmidt/syntaxhighlighter-pages/.git/</div><br />
Assuming your <code>SyntaxHighlighter</code> files are at the same location as your <code>syntaxhighlighter-pages</code> folder, copy all the source files you need to be hosted into your docs folder:<br />
<br />
<div class="bash">$> cd docs<br />
$> cp -r ../../syntaxhighlighter_3.0.83/scripts .<br />
$> cp -r ../../syntaxhighlighter_3.0.83/styles .<br />
$> git add *<br />
$> git commit -m "hosted syntaxhighlighter files"</div><br />
<div class="code gray-box">[master (root-commit) 3556ca1] hosted syntaxhighlighter files<br />
45 files changed, 5483 insertions(+)<br />
create mode 100644 docs/scripts/shAutoloader.js<br />
create mode 100644 docs/scripts/shBrushAS3.js<br />
create mode 100644 docs/scripts/shBrushAppleScript.js<br />
create mode 100644 docs/scripts/shBrushBash.js<br />
...<br />
create mode 100644 docs/styles/shThemeEmacs.css<br />
create mode 100644 docs/styles/shThemeFadeToGrey.css<br />
create mode 100755 docs/styles/shThemeMDUltra.css<br />
create mode 100644 docs/styles/shThemeMidnight.css<br />
create mode 100644 docs/styles/shThemeRDark.css</div><br />
Finally push your local repository to your github:<br />
<br />
<div class="bash">$> cd ..<br />
$> git remote add origin https://github.com/cwschmidt/syntaxhighlighter-pages.git<br />
$> git push -u origin master</div><br />
<div class="code gray-box">Counting objects: 50, done.<br />
Delta compression using up to 2 threads.<br />
Compressing objects: 100% (49/49), done.<br />
Writing objects: 100% (50/50), 47.03 KiB | 0 bytes/s, done.<br />
Total 50 (delta 24), reused 0 (delta 0)<br />
remote: Resolving deltas: 100% (24/24), done.<br />
To https://github.com/cwschmidt/syntaxhighlighter-pages.git<br />
* [new branch] master -> master<br />
Branch master set up to track remote branch master from origin.</div><br />
<br />
<h3>Prepare your github repository</h3><br />
Now log into your Github account and create a new repository named "<code>syntaxhighlighter-pages</code>". To do so, click on the "+" in the upper-right corner of the Github webpage.<br />
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEheVSdKQGMpG14JUBXg4B06fmDgTmIS6zkx_T5eIZYCWeHtInijlTmWpzlr0YXSuBTRAirkVYUIiOzYVIJefJoDUyBzTMR747UQQLUMZ04D1rOerju6WS_cYoIW6yMKInyUrTcIXzcpUMM/s1600/MenuCreateewRepo.png" imageanchor="1"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEheVSdKQGMpG14JUBXg4B06fmDgTmIS6zkx_T5eIZYCWeHtInijlTmWpzlr0YXSuBTRAirkVYUIiOzYVIJefJoDUyBzTMR747UQQLUMZ04D1rOerju6WS_cYoIW6yMKInyUrTcIXzcpUMM/s1600/MenuCreateewRepo.png" /></a><br />
<br />
Fill in the name of the repository and make it "public" as shown below (I didn't test whether publishing pages also works with "private" ones):<br />
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhImNfVYjBukSc4w9qWtPT3C_CB3O0uM_OI0tgj1Qi9Rrppw24TiftU6je9xCkK28UahUB-Ig_56ZLR87nhz7r7cpS8Y4roL68kIq5JDw8rrgic4knyIRDsB_saPiy-KW3duUXHS8yDzo0/s1600/CreateNewRepo.png" imageanchor="1"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhImNfVYjBukSc4w9qWtPT3C_CB3O0uM_OI0tgj1Qi9Rrppw24TiftU6je9xCkK28UahUB-Ig_56ZLR87nhz7r7cpS8Y4roL68kIq5JDw8rrgic4knyIRDsB_saPiy-KW3duUXHS8yDzo0/s320/CreateNewRepo.png" /></a><br />
<br />
Select the newly created repository and go to the settings tab shown below.<br />
Scroll down until you reach the section "Github Pages". In the combo-box where "None" is currently selected, select "master branch /docs folder".<br />
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjBWXnVU3Ug1K6KGqeLBZbZ60_LutQ_ClYHyD2cxjZ1P1JsDHxdsNXCfJ9NqURRDwXore3N16G6ZEjveyjKD2gMirjnzzscWuGzWh_8S7A9GDkc8JZr9m5fMsu4ZaA_dMYlJLhy1UVqeds/s1600/SettingsTabRepo.png" imageanchor="1"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjBWXnVU3Ug1K6KGqeLBZbZ60_LutQ_ClYHyD2cxjZ1P1JsDHxdsNXCfJ9NqURRDwXore3N16G6ZEjveyjKD2gMirjnzzscWuGzWh_8S7A9GDkc8JZr9m5fMsu4ZaA_dMYlJLhy1UVqeds/s320/SettingsTabRepo.png" /></a><br />
<br />
Click "Save" and after some seconds, your pages are successfully published:<br />
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh2HSAZ5q2Bk16UaGmmxSLJ9RBbhMwOaZ_ICd6vOORBlqM2AXELtW42oVJxElVJbbE3M_nQozDaeZA6djayTT05qVcqWHvi2a9nPJzBUEus2oj3XylQ6FUvH-1JFLScqDk8aX4Jso7l1ek/s1600/SettingsTabRepo2.png" imageanchor="1"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh2HSAZ5q2Bk16UaGmmxSLJ9RBbhMwOaZ_ICd6vOORBlqM2AXELtW42oVJxElVJbbE3M_nQozDaeZA6djayTT05qVcqWHvi2a9nPJzBUEus2oj3XylQ6FUvH-1JFLScqDk8aX4Jso7l1ek/s320/SettingsTabRepo2.png" /></a><br />
<br />
<br />
<h3>Prepare your template to support code formatting</h3><br />
Go to your Blogger's blog online editor and choose "Template" from the menu at the left.<br />
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgjNVSqW3QLCu_Tz1v-7Hk8k_FDNnxGLVnrezojI_ygTYOz1R1sYk8_8N4M53YywBxbtN6NaSyjXAEeSh43qk_C9o8tz3zFzag0g11Lu1NBJ9fguZDTXWaKuBJSupqz6XCQUJfJDX0rx04/s1600/Unbenannt5.png" imageanchor="1"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgjNVSqW3QLCu_Tz1v-7Hk8k_FDNnxGLVnrezojI_ygTYOz1R1sYk8_8N4M53YywBxbtN6NaSyjXAEeSh43qk_C9o8tz3zFzag0g11Lu1NBJ9fguZDTXWaKuBJSupqz6XCQUJfJDX0rx04/s320/Unbenannt5.png" /></a><br />
<br />
<br />
Click on "Edit HTML".<br />
<br />
Within the code search for the closing head-tag<br />
<br />
<div style="background: #f0f0f0; border: #cccccc 1px dashed; color: black; font-size: 12px; height: auto; overflow: auto; padding: 5px; width: 95%;"><pre></b:template-skin>
<b:include data='blog' name='google-analytics'/>
<b></head></b>
<body expr:class='&quot;loading&quot; + data:blog.mobileClass'>
<b:section class='navbar' id='navbar' maxwidgets='1' name='Navbar' showaddelement='no'>
</pre></div><br />
And copy the following code (only the links to the files you prepared for hosting) right before the end head-tag<br />
<br />
<div style="background: #f0f0f0; border: #cccccc 1px dashed; color: black; font-size: 12px; height: auto; overflow: auto; padding: 5px; width: 95%;"><pre><!-- Begin SyntaxHighlighter-->
<link href='https://<username>.github.io/syntaxhighlighter-pages/styles/shCore.css' rel='stylesheet' type='text/css'/>
<link href='https://<username>.github.io/syntaxhighlighter-pages/styles/shThemeDefault.css' rel='stylesheet' type='text/css'/>
<script src='https://<username>.github.io/syntaxhighlighter-pages/scripts/shCore.js' type='text/javascript'/>
<script src='https://<username>.github.io/syntaxhighlighter-pages/scripts/shBrushCpp.js' type='text/javascript'/>
<!--script src='https://<username>.github.io/syntaxhighlighter-pages/scripts/shBrushCpp.js' type='text/javascript'/-->
<!--script src='https://<username>.github.io/syntaxhighlighter-pages/scripts/shBrushCSharp.js' type='text/javascript'/>
<script src='https://<username>.github.io/syntaxhighlighter-pages/scripts/shBrushCss.js' type='text/javascript'/>
<script src='https://<username>.github.io/syntaxhighlighter-pages/scripts/shBrushJava.js' type='text/javascript'/>
<script src='https://<username>.github.io/syntaxhighlighter-pages/scripts/shBrushJScript.js' type='text/javascript'/>
<script src='https://<username>.github.io/syntaxhighlighter-pages/scripts/shBrushPhp.js' type='text/javascript'/>
<script src='https://<username>.github.io/syntaxhighlighter-pages/scripts/shBrushPython.js' type='text/javascript'/>
<script src='https://<username>.github.io/syntaxhighlighter-pages/scripts/shBrushRuby.js' type='text/javascript'/>
<script src='https://<username>.github.io/syntaxhighlighter-pages/scripts/shBrushSql.js' type='text/javascript'/>
<script src='https://<username>.github.io/syntaxhighlighter-pages/scripts/shBrushVb.js' type='text/javascript'/>
<script src='https://<username>.github.io/syntaxhighlighter-pages/scripts/shBrushXml.js' type='text/javascript'/>
<script src='https://<username>.github.io/syntaxhighlighter-pages/scripts/shBrushPerl.js' type='text/javascript'/-->
<script type='text/javascript'>
window.setTimeout(function() {
SyntaxHighlighter.config.bloggerMode = true;
SyntaxHighlighter.all();
}, 20);
</script>
<!-- End SyntaxHighlighter-->
</head>
</pre></div><br />
Exchange the "<b><username></b>" within the URL with <b>your own username</b> on Github.<br />
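If you keep the snippet in a local file, the substitution can also be scripted. A small sketch (the demo file and its single sample line are made up; <code>cwschmidt</code> stands in for your own Github username):

```shell
# Replace the <username> placeholder with your Github user name.
# The demo writes one sample line to a temporary file first; in
# practice you would run the sed line over your saved snippet.
GH_USER="cwschmidt"
printf "%s\n" "<link href='https://<username>.github.io/syntaxhighlighter-pages/styles/shCore.css' rel='stylesheet' type='text/css'/>" > /tmp/snippet.html
sed "s/<username>/$GH_USER/g" /tmp/snippet.html
```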
<br />
<br />
<h3 id="TestIt">Test it</h3><br />
For testing, go back to my blog post <a href="http://itsonlycode.blogspot.de/2015/06/blogger-setup-syntaxhighlighter-for.html#TestIt">Blogger: Setup "SyntaxHighlighter" for your blog - Test it</a><br />
<br />
<h2>References:</h2><ol><li><a href="http://alexgorbatchev.com/SyntaxHighlighter/">http://alexgorbatchev.com/SyntaxHighlighter/</a></li>
<li><a href="https://pages.github.com/">Github-Pages</a></li>
<li><a href="http://itsonlycode.blogspot.de/2015/06/blogger-setup-syntaxhighlighter-for.html">Blogger: Setup "SyntaxHighlighter" for your blog</a></li>
</ol>Christian Schmidthttp://www.blogger.com/profile/01625744616539341979noreply@blogger.com7tag:blogger.com,1999:blog-2525946083367405222.post-81350336595069752242016-08-12T23:34:00.002+02:002016-08-12T23:34:55.665+02:00MacOS X: Problem with accessing SMB-Shares on your Synology with "El Capitan"<h2>Objective</h2><br />
I wanted to access my smb shares on my Synology from my MacBook Air that I recently updated to "El Capitan".<br />
<br />
<h2>Motivation</h2><br />
Since I use my Synology DS209+II as my central data store, where I never had problems sharing files via the smb protocol with my MacBook Air as a client, I was surprised to find that I could not establish a connection to the shares on my Synology anymore after updating my MacBook Air to "El Capitan".<br />
<h2><br />
Prerequisites</h2><br />
<ul><li>DS209+II</li>
<li>MacBook Air (or any other Mac) with "El Capitan"</li>
</ul><br />
<h2>Solution</h2><br />
<h3>Explanation</h3><br />
Apparently Apple changed its security policy regarding smb-shares in "El Capitan". Those changes can lead to a significant speed reduction on smb connections and can even prevent you from mounting a share at all. Typically, a login via the Finder menu "Go to server" is not successful, whereas under the previous version of MacOS X, "Mavericks", there were no problems, and nothing had changed on the server in the meantime.<br />
<br />
<h3>Solution</h3><br />
You can quickly fix the problem without downgrading to MacOS 10.11.4. <br />
<br />
Open a Terminal and execute the following command:<br />
<br />
<div class="bash">sudo sh -c 'echo "[default]\nsigning_required=no" > /etc/nsmb.conf'<br />
</div><br />
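As a side note, the <code>\n</code> in the command above relies on the behaviour of the shell's built-in <code>echo</code>. A variant using <code>printf</code>, whose escape handling is specified by POSIX, looks like this (demonstrated against a temporary file; on the Mac you would write to <code>/etc/nsmb.conf</code> with <code>sudo</code> as above):

```shell
# Same two lines, written with printf so the newline handling does not
# depend on the echo implementation.  /tmp/nsmb.conf.demo is only a
# stand-in for /etc/nsmb.conf, which needs sudo to write.
conf="/tmp/nsmb.conf.demo"
printf '[default]\nsigning_required=no\n' > "$conf"
cat "$conf"
```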
Now, restart your MacBook and you should be able to mount a shared drive from your Synology as usual.<br />
<br />
<h2>References:</h2><ol><li><a href="http://www.heise.de/mac-and-i/meldung/OS-X-10-11-5-Abhilfe-fuer-SMB-Probleme-3222725.html">http://www.heise.de/mac-and-i/meldung/OS-X-10-11-5-Abhilfe-fuer-SMB-Probleme-3222725.html (German)</a></li>
</ol><br />
Christian Schmidthttp://www.blogger.com/profile/01625744616539341979noreply@blogger.com0tag:blogger.com,1999:blog-2525946083367405222.post-71143439595851235772016-04-17T23:13:00.002+02:002016-04-17T23:27:26.499+02:00Manage your local scripts via git on a shared directory of your NAS<h2>Objective</h2><br />
I want to store all my scripts in a central place on my Synology NAS. I want to be able to easily add or alter any script on any computer within my local network. I want to be able to synchronize all my computers so that they always have the most recent version of every script installed. For privacy reasons, I don't want to use a public repository service like <a href="http://www.github.com">GitHub</a> or <a href="http://www.bitbucket.com">BitBucket</a>.<br />
<br />
<h2>Motivation</h2><br />
At the moment, I use several computers to develop software: a MacBookAir, a Linux workstation with several Virtual-Machines on it, and a laptop. On all those machines, virtual or physical, I have a separate home account with a <code>bin</code> folder where my scripts for daily work are located. I have a Synology NAS in my network where I back up all those scripts manually into a so-called reference folder. I also synchronize my scripts manually, which is time-consuming and kind of painful, because sometimes I don't even remember which of my local machines has the most recent version of a script at a certain point in time. <br />
<br />
<h2>Prerequisites</h2><br />
<ul><li>Linux Mint 17.3 Rose</li>
<li>Synology DS209+II</li>
<li>git v1.9.1</li>
<li>nfs-shared directory on NAS</li>
</ul><br />
<h2>Solution</h2><br />
<h3>Create a new bare git repository</h3><br />
On my client machine (laptop with Linux Mint) I changed to the mounted directory (<a href="http://itsonlycode.blogspot.de/2015/02/linux-mint-setup-autofs-to-mount.html">automount</a>) from my NAS where I want to create the bare git repository:<br />
<br />
<div class="bash">$> cd /mnt/DiskStation/data/home/cschmidt/</div><br />
At the moment I have my reference directory containing all my scripts already in the <code>bin</code> folder there.<br />
<br />
<div class="bash">$> ls -la</div><br />
<div class="code gray-box">drwxr-xr-x 3 cschmidt users 4096 Apr 16 22:54 .<br />
drwxr-xr-x 3 cschmidt users 4096 Apr 16 22:54 ..<br />
drwxrwxrwx 2 cschmidt users 4096 Apr 16 22:54 bin</div><br />
Now I create the bare git repository named <code>bin.git</code><br />
<br />
<div class="bash">$> git init --bare bin.git</div><br />
<div class="code gray-box">Initialized empty Git repository in /mnt/DiskStation/data/home/cschmidt/bin.git/</div><br />
<h3>Create a non-bare git repository in the folder where the reference scripts are actually stored.</h3><br />
Now I change back to the folder where my scripts are currently stored<br />
<br />
<div class="bash">$> cd bin<br />
$> ls -la</div><br />
<div class="code gray-box">drwxrwxrwx 2 cschmidt users 4096 Apr 16 22:54 .<br />
drwxr-xr-x 4 cschmidt users 4096 Apr 17 2016 ..<br />
-rwxrwxrwx 1 cschmidt users 82 Jul 20 2008 listfoldersize.sh<br />
-rwxr-xr-x 1 cschmidt users 2193 Apr 16 18:20 mkscript.sh<br />
-rwxr--r-- 1 cschmidt users 886 Sep 5 2015 renfiles.rb<br />
-rwxrwxrwx 1 cschmidt users 671 Dez 3 2009 synapticsOnOff.sh<br />
-rwxr--r-- 1 cschmidt users 1626 Jul 12 2015 wav2mp3.sh</div><br />
Here I create a non-bare repository<br />
<br />
<div class="bash">$> git init</div><br />
<div class="code gray-box">Initialized empty Git repository in /mnt/DiskStation/data/home/cschmidt/bin/.git/</div><br />
Now I add all the scripts I already have to the working copy and commit them all.<br />
<br />
<div class="bash">$> git add .<br />
$> git status</div><br />
<div class="code gray-box">On branch master<br />
<br />
Initial commit<br />
<br />
Changes to be committed:<br />
(use "git rm --cached <file>..." to unstage)<br />
<br />
new file: listfoldersize.sh<br />
new file: mkscript.sh<br />
new file: renfiles.rb<br />
new file: synapticsOnOff.sh<br />
new file: wav2mp3.sh</div><br />
<div class="bash">$> git commit -m "initial bunch of scripts"</div><br />
<div class="code gray-box">[master (root-commit) 17879d2] initial bunch of scripts<br />
5 files changed, 805 insertions(+)<br />
create mode 100755 listfoldersize.sh<br />
create mode 100755 mkscript.sh<br />
create mode 100755 renfiles.rb<br />
create mode 100755 synapticsOnOff.sh<br />
create mode 100755 wav2mp3.sh</div><br />
<h3>Push the commited scripts into the bare repository<br />
</h3><br />
So, now I tried to push the committed scripts into my bare host repository<br />
<br />
<div class="bash">$> git push</div><br />
<div class="code gray-box">fatal: No configured push destination.<br />
Either specify the URL from the command-line or configure a remote repository using<br />
<br />
git remote add <name> <url><br />
<br />
and then push using the remote name<br />
<br />
git push <name></div><br />
Ok, I admit it ... I forgot to add the remote repository to push into, so let's do that now<br />
<br />
<div class="bash">$> git remote add origin /mnt/DiskStation/data/home/cschmidt/bin.git</div><br />
and again<br />
<br />
<div class="bash">$> git push</div><br />
oops ...<br />
<br />
<br />
<div class="code gray-box">fatal: The current branch master has no upstream branch.<br />
To push the current branch and set the remote as upstream, use<br />
<br />
git push --set-upstream origin master</div><br />
Ok, I see: I could either name the remote repository every time I want to push something into it, or add it once as the <code>upstream</code>.<br />
Let's do the latter ...<br />
<br />
<div class="bash">$> git push --set-upstream origin master</div><br />
<div class="code gray-box">Counting objects: 7, done.<br />
Delta compression using up to 2 threads.<br />
Compressing objects: 100% (7/7), done.<br />
Writing objects: 100% (7/7), 32.23 KiB | 0 bytes/s, done.<br />
Total 7 (delta 2), reused 0 (delta 0)<br />
To /mnt/DiskStation/data/home/cschmidt/bin.git<br />
* [new branch] master -> master<br />
Branch master set up to track remote branch master from origin.</div><br />
<h3>Synchronize your script repository with your client<br />
</h3><br />
So, I want to have my recently committed scripts synchronized to my client.<br />
<br />
I go to my home account on my laptop and clone the remote repository into my <code>bin</code>.<br />
(If you have already a <code>bin</code> folder in your home account, delete it first.)<br />
<br />
<div class="bash">$> cd ~<br />
$> git clone /mnt/DiskStation/data/home/cschmidt/bin.git bin</div><br />
<div class="code gray-box">Cloning into 'bin'...<br />
done.</div><br />
<div class="bash">$> cd bin<br />
$> ls -la</div><br />
<div class="code gray-box">drwxrwxrwx 2 cschmidt users 4096 Apr 16 22:54 .<br />
drwxr-xr-x 4 cschmidt users 4096 Apr 17 2016 ..<br />
-rwxrwxrwx 1 cschmidt users 82 Jul 20 2008 listfoldersize.sh<br />
-rwxr-xr-x 1 cschmidt users 2193 Apr 16 18:20 mkscript.sh<br />
-rwxr--r-- 1 cschmidt users 886 Sep 5 2015 renfiles.rb<br />
-rwxrwxrwx 1 cschmidt users 671 Dez 3 2009 synapticsOnOff.sh<br />
-rwxr--r-- 1 cschmidt users 1626 Jul 12 2015 wav2mp3.sh</div><br />
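The whole round-trip above can also be rehearsed in a throwaway directory, which is a nice way to convince yourself of the workflow before touching the NAS. A self-contained sketch (the paths and the sample script are made up; a temporary directory stands in for the NAS mount):

```shell
#!/bin/sh
# Rehearse the workflow locally: bare "NAS" repo -> working repo with
# a sample script -> push -> clone, all under a temporary directory.
set -e
T="$(mktemp -d)"

git init --bare --quiet "$T/bin.git"     # stands in for the repo on the NAS

mkdir "$T/bin" && cd "$T/bin"
git init --quiet
echo 'echo hello' > hello.sh             # made-up sample script
git add hello.sh
git -c user.name=demo -c user.email=demo@example.com \
    commit --quiet -m "initial bunch of scripts"
git remote add origin "$T/bin.git"
git push --quiet --set-upstream origin HEAD

git clone --quiet "$T/bin.git" "$T/clone"   # what any other machine does
ls "$T/clone"
```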
Yeah, everything worked fine!Christian Schmidthttp://www.blogger.com/profile/01625744616539341979noreply@blogger.com0tag:blogger.com,1999:blog-2525946083367405222.post-44740123431999872892015-06-19T19:36:00.000+02:002016-12-04T00:30:26.086+01:00Blogger: Setup "SyntaxHighlighter" for your blog<h2>Objective</h2><br />
Install "<a href="http://alexgorbatchev.com/SyntaxHighlighter/">SyntaxHighlighter</a>" on Google-Drive to make it easy to format your source-code on your blog on Blogger. This is needed because the original hosting service of "SyntaxHighlighter" only supports <code>"http"</code>, which Google no longer accepts for security reasons. Installing the needed files on Google-Drive solves the problem by serving the needed Javascript and CSS files via <code>"https"</code>.<br />
<br />
<h2>Motivation</h2><br />
This time I wanted to write a new blog article containing C++ code. I was not very keen on manually highlighting the code-snippets like I did in my first blog article about <a href="http://itsonlycode.blogspot.com/2013/08/how-to-emulate-java-synchronized.html">implementing a Java "synchronized" keyword in C++</a>. Therefore I searched the web for a syntax highlighter. I found many online solutions where you copy&paste your code into a textarea to get html-code that can then be copied into your blog's source-text. Unfortunately this is neither very practical nor flexible if you change or edit your code while writing the blog article, or even later. I needed a solution where you just copy&paste your raw source-code into your blog's text. Ok, maybe you need to assign some specific style, but that's acceptable. In the end, I found "SyntaxHighlighter", a Javascript and CSS framework written by Alex Gorbatchev. I tried it in my test-blog and unfortunately noticed that the default hosting Mr. Gorbatchev provides on his homepage is <code>"http"</code>-only, not <code>"https"</code>. Since a few weeks ago, <code>"http"</code>-only links do not work anymore with Blogger and Google-Chrome. Therefore I had to find an easy way to host the "SyntaxHighlighter" scripts myself via <code>"https"</code>. A very neat solution is to copy the needed files to your Google-Drive and make them publicly accessible. This was the solution of my choice, because I already use the blogging service that Google provides. <br />
<br />
<h2>Prerequisites</h2><br />
<ul><li>Google Account</li>
<li>SyntaxHighlighter (<a href="http://alexgorbatchev.com/SyntaxHighlighter/">http://alexgorbatchev.com/SyntaxHighlighter/</a>)</li>
</ul><br />
<h2>Solution</h2><br />
<h3>Get SyntaxHighlighter</h3><br />
Download SyntaxHighlighter from <a href="http://alexgorbatchev.com/SyntaxHighlighter/download/">http://alexgorbatchev.com/SyntaxHighlighter/download/</a> and unzip it (in my case version 3.0.83):<br />
<br />
<div class="bash">$> unzip syntaxhighlighter_3.0.83.zip</div><br />
<br />
<div class="code gray-box">Archive: syntaxhighlighter_3.0.83.zip<br />
creating: syntaxhighlighter_3.0.83/<br />
...<br />
inflating: syntaxhighlighter_3.0.83/index.html <br />
inflating: syntaxhighlighter_3.0.83/LGPL-LICENSE <br />
inflating: syntaxhighlighter_3.0.83/MIT-LICENSE <br />
creating: syntaxhighlighter_3.0.83/scripts/<br />
inflating: syntaxhighlighter_3.0.83/scripts/shAutoloader.js <br />
...<br />
inflating: syntaxhighlighter_3.0.83/scripts/shBrushCpp.js <br />
...<br />
inflating: syntaxhighlighter_3.0.83/styles/shThemeRDark.css <br />
creating: syntaxhighlighter_3.0.83/tests/<br />
...</div><br />
<div class="deprecated"><b>DEPRECATED SECTION : Setup your Hosting of SyntaxHighlighter on Google-Drive</b><br />
<br />
The following section is deprecated, because Google deprecated web-hosting support in Google-Drive as of 31 August 2015 (Reference: https://gsuiteupdates.googleblog.com/2015/08/deprecating-web-hosting-support-in.html).<br />
However, the web-hosting via Google-Drive only stopped working a year later, as of 31 August 2016.<br />
<br />
An alternative to host your javascript files is using <a href="https://pages.github.com/">github-pages</a>.<br />
So <a href="#TestIt">skip</a> this section or jump directly to <a href="https://itsonlycode.blogspot.de/2016/12/blogger-host-syntaxhighlighter-on.html">Blogger: Host "SyntaxHighlighter" on GitHub-Pages</a></div><br />
<h3>Setup your Hosting of SyntaxHighlighter on Google-Drive</h3><br />
<b>1. Create your folder</b><br />
<br />
Log into your <a href="https://drive.google.com/drive/my-drive">Google Drive</a> account. Create a folder, by clicking "New" and then "Folder". Choose where you want to store your SyntaxHighlighter files. I named the root folder "Blog" and even created a subfolder named "SyntaxHighlighter" to create kind of a directory hierarchy in case I want also to host other packages in future.<br />
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhWzBhr1PQemEIXFo6-W3bnDdIeRg4m_gK0w9sxWkyejar9XggTGpdmv-zjlVlt1OOw1VLtNK93gEQ9YHAQIMu9W2yvG9WWKM1HwB455h_9D6Jgf0ZpqwSeG9cp7u3XZc9ngpdurSKEwRE/s1600/Bildschirmfoto+vom+2015-06-13+21%253A30%253A47.png" imageanchor="1" ><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhWzBhr1PQemEIXFo6-W3bnDdIeRg4m_gK0w9sxWkyejar9XggTGpdmv-zjlVlt1OOw1VLtNK93gEQ9YHAQIMu9W2yvG9WWKM1HwB455h_9D6Jgf0ZpqwSeG9cp7u3XZc9ngpdurSKEwRE/s320/Bildschirmfoto+vom+2015-06-13+21%253A30%253A47.png"></a><br />
<br />
Within the folder "SyntaxHighlighter" create 2 subfolders "<b>scripts</b>" and "<b>styles</b>".<br />
<br />
<b>2. Share your folder</b><br />
<br />
Select the folder and then click the Share button.<br />
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhsQxdJpoher-RsNNogqUAF5lf4JkyLw4Tk0TB54XMI4Kbl6mpGyYk9Q5t8Kp97tfwNmgHct35zDmPrCPbjEzZ5rtamlPckrfd1KF-Oy7MlXK_4neP7CgjWABfMNX1-w5aVYtY0cvhBUsE/s1600/Unbenannt1.png" imageanchor="1" ><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhsQxdJpoher-RsNNogqUAF5lf4JkyLw4Tk0TB54XMI4Kbl6mpGyYk9Q5t8Kp97tfwNmgHct35zDmPrCPbjEzZ5rtamlPckrfd1KF-Oy7MlXK_4neP7CgjWABfMNX1-w5aVYtY0cvhBUsE/s320/Unbenannt1.png"></a><br />
<br />
Click on advanced and choose "change..."<br />
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj4-sUY3vAdFeT12i5WpRBgv0tREcFdRZELwLL1rmtILRGq19AOLSmmN-Gg6V9Cc8PCzcBcAGTyqd8vsJHdFdbVewSKCLPGen1YOy9p5U_xGj_SjK3Uo9qQsqg0tcYhTNhyDVc3T4hG_YA/s1600/Unbenannt2.png" imageanchor="1" ><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj4-sUY3vAdFeT12i5WpRBgv0tREcFdRZELwLL1rmtILRGq19AOLSmmN-Gg6V9Cc8PCzcBcAGTyqd8vsJHdFdbVewSKCLPGen1YOy9p5U_xGj_SjK3Uo9qQsqg0tcYhTNhyDVc3T4hG_YA/s320/Unbenannt2.png"></a><br />
<br />
then choose "On - Public on the web" in the next dialog<br />
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiYmhpH8of0U0S1dzfSKld5kXHQI92VoSt6YQYei5bNI0tESfZ60dJ5b9u0LjOFxGoyYUH0Flj3tfMcZF-bx_UiD9mxAAYnfs-rzyqgBerloA5uPyYTthj9hG8HgzfmJPi3pfnKdk3yV5M/s1600/Unbenannt3.png" imageanchor="1" ><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiYmhpH8of0U0S1dzfSKld5kXHQI92VoSt6YQYei5bNI0tESfZ60dJ5b9u0LjOFxGoyYUH0Flj3tfMcZF-bx_UiD9mxAAYnfs-rzyqgBerloA5uPyYTthj9hG8HgzfmJPi3pfnKdk3yV5M/s320/Unbenannt3.png"></a><br />
<br />
and click "Save".<br />
<br />
<b>3. Upload the necessary files from SyntaxHighlighter</b><br />
<br />
Navigate back into your "<b>Blog/SyntaxHighligter/scripts</b>" folder and choose "File upload".<br />
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgQMRNWK91oCz8uIlI-_cv-NkzCl5atiGUBxwiUO_c7arMtZqktagUZilKud6wsc1gexLd_pWBCQ71Koyky_QoaCB6QCZoSYIVh3klER5arXweHRfBMXk07u3BXm4Kl_C-T5ZG3BIkC8Pg/s1600/Unbenannt4.png" imageanchor="1" ><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgQMRNWK91oCz8uIlI-_cv-NkzCl5atiGUBxwiUO_c7arMtZqktagUZilKud6wsc1gexLd_pWBCQ71Koyky_QoaCB6QCZoSYIVh3klER5arXweHRfBMXk07u3BXm4Kl_C-T5ZG3BIkC8Pg/s400/Unbenannt4.png"></a><br />
<br />
Now, navigate to the local folder where you unzipped <b>syntaxhighlighter_3.0.83.zip</b> and go into the subfolder "<b>scripts</b>". Here you have to select at least <b>"shCore.js"</b> and at least one <b>"shBrushXX.js"</b> file. The <b>"shBrushXX.js"</b> files determine which languages will be highlighted later on your blog. I have only chosen "<b>shBrushCpp.js</b>" for the moment. <br />
<br />
Navigate into the "<b>styles</b>" folder and choose "File upload" again. Upload at least "<b>shCore.css</b>" and "<b>shThemeDefault.css</b>" from the "<b>styles</b>" folder of your local SyntaxHighlighter sources. <br />
<br />
<h3>Prepare your template to support code formatting</h3><br />
Go to your Blogger's blog online editor and choose "Template" from the menu at the left.<br />
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgjNVSqW3QLCu_Tz1v-7Hk8k_FDNnxGLVnrezojI_ygTYOz1R1sYk8_8N4M53YywBxbtN6NaSyjXAEeSh43qk_C9o8tz3zFzag0g11Lu1NBJ9fguZDTXWaKuBJSupqz6XCQUJfJDX0rx04/s1600/Unbenannt5.png" imageanchor="1" ><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgjNVSqW3QLCu_Tz1v-7Hk8k_FDNnxGLVnrezojI_ygTYOz1R1sYk8_8N4M53YywBxbtN6NaSyjXAEeSh43qk_C9o8tz3zFzag0g11Lu1NBJ9fguZDTXWaKuBJSupqz6XCQUJfJDX0rx04/s320/Unbenannt5.png"></a><br />
<br />
<br />
Click on "Edit HTML".<br />
<br />
Within the code search for the closing head-tag<br />
<br />
<div style="BORDER: #cccccc 1px dashed; PADDING: 5px; WIDTH: 95%; BACKGROUND: #f0f0f0; COLOR: #000000; FONT-SIZE: 12px; OVERFLOW: auto; height:auto"><pre></b:template-skin>
<b:include data='blog' name='google-analytics'/>
<b></head></b>
<body expr:class='&quot;loading&quot; + data:blog.mobileClass'>
<b:section class='navbar' id='navbar' maxwidgets='1' name='Navbar' showaddelement='no'>
</pre></div><br />
Then copy the following code (only the links to the files you prepared for hosting) right before the closing head-tag:<br />
<br />
<div style="BORDER: #cccccc 1px dashed; PADDING: 5px; WIDTH: 95%; BACKGROUND: #f0f0f0; COLOR: #000000; FONT-SIZE: 12px; OVERFLOW: auto; height:auto"><pre><!-- Begin SyntaxHighlighter-->
<link href='https://googledrive.com/host/xxxxx/styles/shCore.css' rel='stylesheet' type='text/css'/>
<link href='https://googledrive.com/host/xxxxx/styles/shThemeDefault.css' rel='stylesheet' type='text/css'/>
<script src='https://googledrive.com/host/xxxxx/scripts/shCore.js' type='text/javascript'/>
<script src='https://googledrive.com/host/xxxxx/scripts/shBrushCpp.js' type='text/javascript'/>
<!--script src='https://googledrive.com/host/xxxxx/scripts/shBrushCpp.js' type='text/javascript'/-->
<!--script src='https://googledrive.com/host/xxxxx/scripts/shBrushCSharp.js' type='text/javascript'/>
<script src='https://googledrive.com/host/xxxxx/scripts/shBrushCss.js' type='text/javascript'/>
<script src='https://googledrive.com/host/xxxxx/scripts/shBrushJava.js' type='text/javascript'/>
<script src='https://googledrive.com/host/xxxxx/scripts/shBrushJScript.js' type='text/javascript'/>
<script src='https://googledrive.com/host/xxxxx/scripts/shBrushPhp.js' type='text/javascript'/>
<script src='https://googledrive.com/host/xxxxx/scripts/shBrushPython.js' type='text/javascript'/>
<script src='https://googledrive.com/host/xxxxx/scripts/shBrushRuby.js' type='text/javascript'/>
<script src='https://googledrive.com/host/xxxxx/scripts/shBrushSql.js' type='text/javascript'/>
<script src='https://googledrive.com/host/xxxxx/scripts/shBrushVb.js' type='text/javascript'/>
<script src='https://googledrive.com/host/xxxxx/scripts/shBrushXml.js' type='text/javascript'/>
<script src='https://googledrive.com/host/xxxxx/scripts/shBrushPerl.js' type='text/javascript'/-->
<script type='text/javascript'>
window.setTimeout(function() {
SyntaxHighlighter.config.bloggerMode = true;
SyntaxHighlighter.all();
}, 20);
</script>
<!-- End SyntaxHighlighter-->
</head>
</pre></div><br />
The "xxxxx" is a placeholder for your very own share id. You can find your share id by opening the "SyntaxHighlighter" folder on your Google Drive again. Replace every "xxxxx" in the code above with your share id, e.g. "ZsTjl4Sjl0YmQ4SmNTh6WavThstZbGR0Mm1ydQAfm9oWlhkQlNRJSm9wQ3Z0Bxn0jvqYuNlU".<br />
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEghrpms7iflP3ZSzQ_fGFwWfr03v2kAC75GKc69gMY7w5Jnximy-q7B7qOlCzsjFsJwJnu5Pvg9DGvixTsTvOqx6ob_uA_2DVEMhfb0eQiNdVrnmB2qQMGtpkuESsBr1_tHJftzEUVVbXk/s1600/Unbenannt6.png" imageanchor="1" ><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEghrpms7iflP3ZSzQ_fGFwWfr03v2kAC75GKc69gMY7w5Jnximy-q7B7qOlCzsjFsJwJnu5Pvg9DGvixTsTvOqx6ob_uA_2DVEMhfb0eQiNdVrnmB2qQMGtpkuESsBr1_tHJftzEUVVbXk/s400/Unbenannt6.png"></a><br />
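If you would rather not replace every "xxxxx" by hand, a sed one-liner can do it in one pass. This is just a sketch: it assumes you pasted the snippet above into a local file called <code>head-snippet.html</code>, and the share id shown is made up.

```shell
# Replace every "xxxxx" placeholder with your own share id in one pass.
# SHARE_ID is a made-up example -- use the id of your own Drive folder.
SHARE_ID="0B1234567890abcdef"
sed -i "s/xxxxx/${SHARE_ID}/g" head-snippet.html
```

Afterwards, paste the contents of the file into the template editor as described above.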
<br />
<h3 id="TestIt">Test it</h3><br />
Let's put some source code into a blog article to check that it shows up correctly. There are two ways to do this: <br />
<br />
<b>Method 1:</b><br />
<br />
<div style="BORDER: #cccccc 1px dashed; PADDING: 5px; WIDTH: 95%; BACKGROUND: #f0f0f0; COLOR: #000000; FONT-SIZE: 12px; OVERFLOW: auto; height:auto"><pre><script type="syntaxhighlighter" class="brush: cpp"><![CDATA[
// 'Hello World!' program
#include <iostream>
int main()
{
std::cout << "Hello World!" << std::endl;
return 0;
}
]]></script></pre></div><br />
<b>Result:</b><br />
<br />
<script type="syntaxhighlighter" class="brush: cpp">
<![CDATA[
// 'Hello World!' program
#include <iostream>
int main()
{
std::cout << "Hello World!" << std::endl;
return 0;
}
]]>
</script><br />
<br />
<b>Method 2:</b><br />
<br />
<div style="BORDER: #cccccc 1px dashed; PADDING: 5px; WIDTH: 95%; BACKGROUND: #f0f0f0; COLOR: #000000; FONT-SIZE: 12px; OVERFLOW: auto; height:auto"><pre><pre class="brush: cpp">
// 'Hello World!' program
#include &lt;iostream&gt;
int main()
{
std::cout << "Hello World!" << std::endl;
return 0;
}
</pre></pre></div><br />
<b>Result:</b><br />
<br />
<pre class="brush: cpp">// 'Hello World!' program
#include <iostream>
int main()
{
std::cout << "Hello World!" << std::endl;
return 0;
}
</pre>
Great! It works! Have fun!
<h2>References:</h2><ol><li><a href="http://alexgorbatchev.com/SyntaxHighlighter/">http://alexgorbatchev.com/SyntaxHighlighter/</a></li>
<li><a href="http://geektalkin.blogspot.de/2009/11/embed-code-syntax-highlighting-in-blog.html">Embed Code Syntax Highlighting in Blog</a></li>
<li><a href="http://www.komku.org/2013/08/how-to-host-javascript-or-css-files-on-google-drive.html">How to Host JavaScript or CSS Files on Google Drive</a></li>
</ol>Christian Schmidthttp://www.blogger.com/profile/01625744616539341979noreply@blogger.com6tag:blogger.com,1999:blog-2525946083367405222.post-56555950833320133902015-05-15T23:51:00.000+02:002015-08-22T21:48:46.071+02:00Install multiple versions of gcc at the same time<h2>Objective</h2><br />
Install gcc/g++ version 4.9 and 5.1 on my current Linux Mint 17.1 Rebecca distribution. Use gcc 4.9 to compile the VirtualBox kernel driver.<br />
<br />
<h2>Motivation</h2><br />
Recently I upgraded my <code>gcc</code> to version 5.1 via the Synaptic package manager as a suggested update. By doing this, my previously installed <code>gcc</code> 4.9 was removed. A few days later I updated my VirtualBox 4.3.26 to version 4.3.28. After the update, all VirtualBox guest systems refused to start, claiming that I also had to update the VirtualBox kernel drivers. Ok, no problem, as it gave me the command to do so ... but wait, I am running a kernel that is not yet officially supported (as I explained in a previous post). The command that recompiles the kernel driver for VirtualBox failed miserably, because the kernel headers do not provide a <code>compiler-gcc5.h</code> for gcc 5.1. <br />
<br />
<h2>Prerequisites</h2><br />
<ul><li>Linux Mint 17.1 Rebecca</li>
<li>Kernel version 3.17.1</li>
<li>VirtualBox 4.3.28</li>
</ul><br />
<h2>Problem</h2><br />
The command to update the kernel driver for VirtualBox 4.3.26 was:<br />
<br />
<div class="bash">$> sudo /etc/init.d/vboxdrv setup</div><br />
I still just had <code>gcc 5.1</code> installed and unfortunately got the following error:<br />
<br />
<div class="code gray-box">Stopping VirtualBox kernel modules ...done.<br />
Uninstalling old VirtualBox DKMS kernel modules ...done.<br />
Trying to register the VirtualBox kernel modules using DKMSError! Bad return status for module build on kernel: 3.17.1-031701-generic (x86_64)<br />
Consult /var/lib/dkms/vboxhost/4.3.28/build/make.log for more information.<br />
...failed!<br />
(Failed, trying without DKMS)<br />
Recompiling VirtualBox kernel modules ...failed!<br />
(Look at /var/log/vbox-install.log to find out what went wrong)</div><br />
The error in the log "<code>/var/lib/dkms/vboxhost/4.3.28/build/make.log</code>" mentioned above was ...<br />
<br />
<div class="code gray-box">LD /var/lib/dkms/vboxhost/4.3.28/build/built-in.o<br />
LD /var/lib/dkms/vboxhost/4.3.28/build/vboxdrv/built-in.o<br />
CC [M] /var/lib/dkms/vboxhost/4.3.28/build/vboxdrv/linux/SUPDrv-linux.o<br />
In file included from include/linux/compiler.h:54:0,<br />
from /var/lib/dkms/vboxhost/4.3.28/build/vboxdrv/include/iprt/types.h:116,<br />
from /var/lib/dkms/vboxhost/4.3.28/build/vboxdrv/include/VBox/types.h:30,<br />
from /var/lib/dkms/vboxhost/4.3.28/build/vboxdrv/linux/../SUPDrvInternal.h:35,<br />
from /var/lib/dkms/vboxhost/4.3.28/build/vboxdrv/linux/SUPDrv-linux.c:31:<br />
include/linux/compiler-gcc.h:106:30: fatal error: linux/compiler-gcc5.h: No such file or directory<br />
compilation terminated.<br />
make[2]: *** [/var/lib/dkms/vboxhost/4.3.28/build/vboxdrv/linux/SUPDrv-linux.o] Error 1<br />
make[1]: *** [/var/lib/dkms/vboxhost/4.3.28/build/vboxdrv] Error 2<br />
make: *** [_module_/var/lib/dkms/vboxhost/4.3.28/build] Error 2</div><br />
<h2>Solution</h2><br />
<h3>Install multiple versions of gcc</h3><br />
There is a very convenient package called <code>update-alternatives</code> to do the job of installing multiple versions of gcc and select the proper version for your current compile tasks.<br />
<br />
To install just gcc/g++ 4.9 and gcc/g++ 5.1, first get rid of any other version by cleaning up the system:<br />
<br />
<div class="bash">$> sudo update-alternatives --remove-all gcc && sudo update-alternatives --remove-all g++</div><br />
After the command returned you can install your required versions of gcc/g++:<br />
<br />
<div class="bash">$> sudo apt-get install gcc-4.9 gcc-5 g++-4.9 g++-5</div><br />
Be patient, because this may take a while if you don't have the packages already installed.<br />
<br />
If you get an error message like:<br />
<br />
<div class="code gray-box">Reading package lists... Done<br />
Building dependency tree <br />
Reading state information... Done<br />
Note, selecting 'gcc-4.9-base' for regex 'gcc-4.9'<br />
E: Unable to locate package gcc-5<br />
E: Unable to locate package g++-4.9<br />
E: Couldn't find any package by regex 'g++-4.9'<br />
E: Unable to locate package g++-5<br />
E: Couldn't find any package by regex 'g++-5'</div><br />
You have to add the repository <b>"ppa:ubuntu-toolchain-r/test"</b> as a package source first by executing:<br />
<br />
<div class="bash">$> sudo add-apt-repository ppa:ubuntu-toolchain-r/test<br />
$> sudo apt-get update</div><br />
<h3>Install the gcc versions in update-alternatives</h3><br />
To make update-alternatives aware of your installed compilers you have to execute the following additional installer commands:<br />
<br />
<div class="bash">$> sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-4.9 10<br />
$> sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-5 20<br />
<br />
$> sudo update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-4.9 10<br />
$> sudo update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-5 20<br />
<br />
$> sudo update-alternatives --install /usr/bin/cc cc /usr/bin/gcc 30<br />
$> sudo update-alternatives --set cc /usr/bin/gcc<br />
<br />
$> sudo update-alternatives --install /usr/bin/c++ c++ /usr/bin/g++ 30<br />
$> sudo update-alternatives --set c++ /usr/bin/g++</div><br />
<h3>Configure update-alternatives</h3><br />
The number at the end of each <code>--install</code> line is the priority: in auto mode, update-alternatives selects the alternative with the highest priority. As a last step you have to configure the default commands for gcc and g++:<br />
<div class="bash">$> sudo update-alternatives --config gcc</div><br />
If everything went well, you should be able to interactively choose which C-Compiler version you want to activate:<br />
<br />
<div class="code gray-box">There are 2 choices for the alternative gcc (providing /usr/bin/gcc).<br />
<br />
Selection Path Priority Status<br />
------------------------------------------------------------<br />
* 0 /usr/bin/gcc-5 20 auto mode<br />
1 /usr/bin/gcc-4.9 10 manual mode<br />
2 /usr/bin/gcc-5 20 manual mode<br />
<br />
Press enter to keep the current choice[*], or type selection number: _</div><br />
<br />
The same can be done for the C++-Compiler g++:<br />
<br />
<div class="bash">$> sudo update-alternatives --config g++</div><br />
I've chosen gcc-4.9 and g++-4.9 for the moment.<br />
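To double-check which compiler is actually active after switching (or to switch without the interactive menu via <code>update-alternatives --set gcc /usr/bin/gcc-4.9</code>), look at the first line of <code>gcc --version</code>. The version number can be pulled out like this; the sample line is illustrative, your output will differ:

```shell
# In practice you would pipe the real command:  gcc --version | head -n 1
sample_line='gcc (Ubuntu 4.9.2-10ubuntu13) 4.9.2'
# The version is the last whitespace-separated field of that line.
version=$(printf '%s\n' "$sample_line" | awk '{print $NF}')
echo "$version"   # -> 4.9.2
```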
<br />
<h2>Recompile the kernel module for VirtualBox with gcc 4.9</h2><br />
Finally, I typed again:<br />
<br />
<div class="bash">$> sudo /etc/init.d/vboxdrv setup</div><br />
... and this time the update was successful:<br />
<br />
<div class="code gray-box">Stopping VirtualBox kernel modules ...done.<br />
Uninstalling old VirtualBox DKMS kernel modules ...done.<br />
Trying to register the VirtualBox kernel modules using DKMS<br />
...done.<br />
Starting VirtualBox kernel modules ...done.</div><br />
As a last step I switched back to gcc/g++ 5.1 using the commands given above.<br />
<br />
Have fun!<br />
Christian Schmidthttp://www.blogger.com/profile/01625744616539341979noreply@blogger.com1tag:blogger.com,1999:blog-2525946083367405222.post-75561840903937818402015-04-16T21:31:00.000+02:002015-04-22T00:03:52.540+02:00Install Bash-Shell in favour of Ash-Shell on your Synology<h2>Objective</h2><br />
I wanted to use the <code>bash-shell</code> on my Synology DS209+II. There are no official packages provided by Synology, but I knew there is a way to install custom and optional packages via <code>ipkg</code>.<br />
<br />
<h2>Motivation</h2><br />
As my Synology is mostly running 24/7, I wanted to establish a custom download script that is started and stopped as a cron job at certain periods of time. The script itself uses some commands relying on a <code>bash-shell</code>, but the Synology default command shell is just the less powerful <code>ash-shell</code>.<br />
<h2>Prerequisites</h2><br />
<ul><li>DS209+II</li>
<li><a href="http://itsonlycode.blogspot.de/2015/04/prepare-your-synology-nas-to-install.html">ipkg - Custom package installer</a></li>
</ul><br />
<h2>Solution</h2><br />
<h3>Open a ssh-connection to your Synology</h3><br />
Log into your Synology as root using ssh (e.g "<code>ssh -l root DiskStation</code>").<br />
<br />
Install the <code>Bash-Shell-Package</code>:<br />
<br />
<div class="bash">DiskStation$> ipkg install -A bash</div><br />
You should see something like this:<br />
<br />
<div class="code gray-box">DiskStation> ipkg install bash<br />
Installing bash (3.2.54-1) to root...<br />
Downloading http://ipkg.nslu2-linux.org/feeds/optware/syno-e500/cross/unstable/bash_3.2.54-1_powerpc.ipk<br />
Installing readline (6.1-2) to root...<br />
Downloading http://ipkg.nslu2-linux.org/feeds/optware/syno-e500/cross/unstable/readline_6.1-2_powerpc.ipk<br />
Installing ncurses (5.7-3) to root...<br />
Downloading http://ipkg.nslu2-linux.org/feeds/optware/syno-e500/cross/unstable/ncurses_5.7-3_powerpc.ipk<br />
Configuring bash<br />
Configuring ncurses<br />
update-alternatives: Linking //opt/bin/clear to /opt/bin/ncurses-clear<br />
Configuring readline<br />
Successfully terminated.</div><br />
Now, the <code>Bash-Shell</code> is installed, but when you log into your Synology it is not yet started automatically. You are still on the <code>Ash-Shell</code>.<br />
<br />
<h3>Activate automatic log-in with Bash</h3><br />
You could exchange the log-in shell in <code>/etc/passwd</code> on your Synology, by exchanging the line (here shown for the user root):<br />
<br />
<div class="code gray-box">root:x:0:0:root:/root:/bin/ash</div><br />
by<br />
<br />
<div class="code gray-box">root:x:0:0:root:/root:/bin/bash</div><br />
Unfortunately, doing so has the disadvantage that you might lose the ability to log into your Synology remotely at all after a firmware upgrade: optional packages like <code>Bash</code> are installed into <code>/opt</code>, which may be unavailable after a system update. To prevent this accidental lock-out, it's preferable to keep logging in with the <code>Ash-Shell</code>, but to start <code>Bash</code> automatically right after you are successfully logged in.<br />
<br />
To achieve this you have to create/edit the file <code>.profile</code> in the home directory of the user that should be able to log into your Synology. Go to the home directory (e.g. <code>/root/</code> for the user root) and type the following as that user on your Synology:<br />
<br />
<div class="bash">DiskStation$> vi .profile</div><br />
If the file already has content, just add these lines to it: <br />
<br />
<div class="code gray-box"># ...<br />
<br />
if [[ -x /opt/bin/bash ]]; then<br />
exec /opt/bin/bash<br />
fi</div><br />
That's it. The next time <code>root</code> logs into your Synology, the session lands in a <code>Bash-Shell</code>.<br />
<br />
<h3>Refine Configuration</h3><br />
If you want a different command prompt, some alias commands, or at least the proper shell name in your "SHELL" environment variable, it's advisable to also create a <code>.bashrc</code> in the home directory, with the following example content (feel free to alter it to your convenience): <br />
<br />
<div class="bash">DiskStation$> vi .bashrc</div><br />
<div class="code gray-box">PS1='\u@\h:\w \$ '<br />
export SHELL=/opt/bin/bash</div><br />
The first line gives you a nice bash prompt. The second explicitly sets the "SHELL" variable to your correct shell.<br />
<br />
If you also want other scripts to use <code>Bash</code> automatically instead of <code>Ash</code>, additionally create a symbolic link to it in <code>/bin/</code>:<br />
<br />
<div class="bash">DiskStation$> ln -s /opt/bin/bash /bin/bash</div><br />
<b>ADVICE:</b> Keep a separate root shell window open until you have confirmed all of the changes work.Christian Schmidthttp://www.blogger.com/profile/01625744616539341979noreply@blogger.com2tag:blogger.com,1999:blog-2525946083367405222.post-55411351312618022672015-04-12T23:56:00.000+02:002017-07-12T21:22:46.985+02:00Prepare your Synology NAS to install custom packages via ipkg<h2>Objective</h2><br />
I wanted to use the <code>bash-shell</code> and a newer <code>wget</code> on my Synology DS209+II. There are no official packages provided by Synology, but I knew there is a way to install custom and optional packages via <code>ipkg</code>. Unfortunately, ipkg must be bootstrapped first, because it is a custom package itself.<br />
<br />
<h2>Motivation</h2><br />
As my Synology is mostly running 24/7, I wanted to establish a custom download script that is started and stopped as a cron job at certain periods of time. The script itself uses some commands relying on a <code>bash-shell</code>, but the Synology default command shell is just the less powerful <code>ash-shell</code>. Also, some of the wget options I use in that script seem to be broken in the <code>wget</code> version my Synology has installed.<br />
<br />
<h2>Prerequisites</h2><br />
<ul><li>DS209+II</li>
<li>Proper bootstrap script for your Synology NAS</li>
</ul><br />
<h2>Solution</h2><br />
<h3>Download the proper bootstrap script</h3><br />
First you have to find out which processor is used on your specific Synology NAS.<br />
<br />
Log into your Synology as root using ssh (e.g "<code>ssh -l root DiskStation</code>") and type the following command:<br />
<br />
<div class="bash">DiskStation$> cat /proc/cpuinfo</div><br />
Doing so on my DS209+II printed the following information:<br />
<br />
<div class="code gray-box">processor : 0<br />
cpu : e500v2<br />
clock : 1066.560000MHz<br />
revision : 2.2 (pvr 8021 0022)<br />
bogomips : 133.32<br />
timebase : 66660000<br />
platform : MPC8544 DS<br />
model : MPC8544DS<br />
Vendor : Freescale Semiconductor<br />
PVR : 0x80210022<br />
SVR : 0x80340011<br />
PLL setting : 0x4<br />
Memory : 512 MB<br />
Memory : 512 MB</div><br />
Now that I know I have a <b>Freescale PowerPC (e500v*)</b>, I can download the proper bootstrap script <a href="http://ipkg.nslu2-linux.org/feeds/optware/syno-e500/cross/unstable/syno-e500-bootstrap_1.2-7_powerpc.xsh">here</a>:<br />
<br />
You can find a list of all bootstrap scripts here (column: "Optware-Pfad/IPKG"):<br />
<a href="http://www.synology-wiki.de/index.php/Welchen_Prozessortyp_besitzt_mein_System%3F">Processors used in Synology NAS Systems (German)</a><br />
<br />
Now download the proper script and copy it to your Synology.<br />
<br />
On the (still open) ssh connection, "<code>cd</code>" into the folder where you stored the script and type (the script name for your Synology model may differ!):<br />
<br />
<div class="bash">DiskStation$> sh syno-e500-bootstrap_1.2-7_powerpc.xsh</div><br />
If, after hitting enter, you see something like this:<br />
<br />
<div class="code gray-box">Optware Bootstrap for syno-e500.<br />
Extracting archive... please wait<br />
bootstrap/<br />
bootstrap/bootstrap.sh<br />
bootstrap/ipkg-opt.ipk<br />
bootstrap/ipkg.sh<br />
bootstrap/optware-bootstrap.ipk<br />
bootstrap/wget.ipk<br />
1330+1 records in<br />
1330+1 records out</div><br />
you already have an older version of <code>ipkg</code> installed.<br />
<br />
You first have to remove this version before installing the new one.<br />
<br />
<h3>Backup your old ipkg configuration (only if you already have ipkg on your system)</h3><br />
If you have already installed <code>ipkg</code> and other custom packages via ipkg, then make a backup of the following folders on your Synology:<br />
<br />
<ul><li>/volume1/@optware</li>
<li>/usr/lib/ipkg</li>
</ul><br />
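One way to take that backup is a single tar archive. This is only a sketch, assuming there is enough free space under <code>/volume1</code> and the archive name is your choice:

```shell
# Archive both ipkg-related folders before wiping them.
tar -czf /volume1/ipkg-backup-$(date +%Y%m%d).tar.gz /volume1/@optware /usr/lib/ipkg
```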
then remove all existing optware packages:<br />
<br />
<div class="bash">DiskStation$> rm -rf /volume1/@optware && rm -rf /usr/lib/ipkg</div><br />
Afterwards, you must <b>*reboot your Synology*</b> and then restart the bootstrap script.<br />
<br />
<h3>Re-Install ipkg</h3><br />
Again, log into your Synology via ssh as root and type:<br />
<br />
<div class="bash">DiskStation$> sh syno-e500-bootstrap_1.2-7_powerpc.xsh</div><br />
Now you should see a full install log:<br />
<br />
<div class="code gray-box">Optware Bootstrap for syno-e500.<br />
Extracting archive... please wait<br />
bootstrap/<br />
bootstrap/bootstrap.sh<br />
bootstrap/ipkg-opt.ipk<br />
bootstrap/ipkg.sh<br />
bootstrap/optware-bootstrap.ipk<br />
bootstrap/wget.ipk<br />
1330+1 records in<br />
1330+1 records out<br />
Creating temporary ipkg repository...<br />
Installing optware-bootstrap package...<br />
Unpacking optware-bootstrap.ipk...Done.<br />
Configuring optware-bootstrap.ipk...Setting up ipkg arch-file<br />
Done.<br />
Installing ipkg...<br />
Unpacking ipkg-opt.ipk...Done.<br />
Configuring ipkg-opt.ipk...WARNING: can't open config file: /usr/syno/ssl/openssl.cnf<br />
Done.<br />
Removing temporary ipkg repository...<br />
Installing wget...<br />
Installing wget (1.12-2) to root...<br />
Configuring wget<br />
Successfully terminated.<br />
Creating /opt/etc/ipkg/cross-feed.conf...<br />
Setup complete.</div><br />
Update your <code>PATH</code> variable, so that <code>ipkg</code> can be found after a reboot.<br />
To do so, open the file <code>$HOME/.profile</code> and edit the line with your <code>PATH</code>:<br />
<br />
<div class="code gray-box">PATH=/opt/bin:/opt/sbin:[the content that was already there]</div><br />
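If you end up bootstrapping more than once, the edit can be guarded so the paths are only added a single time. A minimal sketch, assuming your shell reads <code>$HOME/.profile</code> on log-in:

```shell
# Prepend /opt/bin and /opt/sbin to PATH in .profile, but only once.
PROFILE="$HOME/.profile"
if ! grep -q '/opt/bin' "$PROFILE" 2>/dev/null; then
    # Single quotes keep $PATH literal, so it expands at log-in time.
    printf 'PATH=/opt/bin:/opt/sbin:$PATH\n' >> "$PROFILE"
fi
```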
Finally, it's recommended to run an update so that <code>ipkg</code> works with the newest package lists:<br />
<br />
<div class="bash">DiskStation$> ipkg update</div><br />
<div class="code gray-box">Downloading http://ipkg.nslu2-linux.org/feeds/optware/syno-e500/cross/unstable/Packages.gz<br />
Inflating http://ipkg.nslu2-linux.org/feeds/optware/syno-e500/cross/unstable/Packages.gz<br />
Updated list of available packages in /opt/lib/ipkg/lists/cross<br />
Successfully terminated.</div><br />
If you use "<code>ipkg upgrade</code>" instead of "<code>ipkg update</code>", all already installed custom packages are upgraded as well.<br />
<br />
Bootstrap done. Now you can install optional packages via the <code>ipkg</code> command on your Synology.<br />
<br />
<h2>References:</h2><ol><li><a href="https://www.naschenweng.info/2012/01/17/synology-dsm-4-0-beta-breaks-ipkg/">https://www.naschenweng.info/2012/01/17/synology-dsm-4-0-beta-breaks-ipkg/</a></li>
</ol>Christian Schmidthttp://www.blogger.com/profile/01625744616539341979noreply@blogger.com0tag:blogger.com,1999:blog-2525946083367405222.post-62669467469762919662015-03-07T23:44:00.001+01:002015-03-07T23:47:25.087+01:00Linux Mint: Install new kernel version and update Virtual Box kernel module<h2>Objective</h2><br />
I want to install the latest Linux kernel available for my system. I also want to keep working with my already installed VirtualBox, which needs its kernel module rebuilt after a kernel switch.<br />
<br />
<h2>Motivation</h2><br />
There was no real reason behind my plan to install the Linux kernel 3.17.1. I just thought it was time for an upgrade. My current kernel is 3.13. <br />
<br />
<h2>Prerequisites</h2><br />
<ul><li>Linux Mint 17.1 Rebecca</li>
<li>Oracle VM VirtualBox Manager 4.3.24</li>
</ul><br />
<h2>Solution</h2><br />
<h3>Install the Linux Kernel packages</h3><br />
To check which kernel packages are currently available you can do a search like (filtered by version "3.17"):<br />
<br />
<div class="bash">$> apt-cache search linux- | grep 3.17</div><br />
you'll see some output like this:<br />
<br />
<div class="code gray-box">linux-headers-3.17.1-031701 - Header files related to Linux kernel version 3.17.1<br />
linux-headers-3.17.1-031701-generic - Linux kernel headers for version 3.17.1 on 64 bit x86 SMP<br />
linux-image-3.17.1-031701-generic - Linux kernel image for version 3.17.1 on 64 bit x86 SMP<br />
</div><br />
For 64-Bit systems download the following packages ...<br />
<br />
<div class="bash">$> wget http://kernel.ubuntu.com/~kernel-ppa/mainline/v3.17.1-utopic/linux-headers-3.17.1-031701_3.17.1-031701.201410150735_all.deb<br />
</div><br />
<div class="bash">$> wget http://kernel.ubuntu.com/~kernel-ppa/mainline/v3.17.1-utopic/linux-headers-3.17.1-031701-generic_3.17.1-031701.201410150735_amd64.deb</div><br />
<div class="bash">$> wget http://kernel.ubuntu.com/~kernel-ppa/mainline/v3.17.1-utopic/linux-image-3.17.1-031701-generic_3.17.1-031701.201410150735_amd64.deb</div><br />
and for 32-Bit systems download the following ...<br />
<br />
<div class="bash">$> wget http://kernel.ubuntu.com/~kernel-ppa/mainline/v3.17.1-utopic/linux-headers-3.17.1-031701_3.17.1-031701.201410150735_all.deb</div><br />
<div class="bash">$> wget http://kernel.ubuntu.com/~kernel-ppa/mainline/v3.17.1-utopic/linux-headers-3.17.1-031701-generic_3.17.1-031701.201410150735_i386.deb</div><br />
<div class="bash">$> wget http://kernel.ubuntu.com/~kernel-ppa/mainline/v3.17.1-utopic/linux-image-3.17.1-031701-generic_3.17.1-031701.201410150735_i386.deb</div><br />
Now install the packages:<br />
<br />
<div class="bash">$> sudo dpkg -i linux-headers-3.17.1*.deb linux-image-3.17.1*.deb</div><br />
Reboot the system.<br />
<br />
<div class="bash">$> sudo reboot</div><br />
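After the reboot, it is worth confirming that the new kernel is really the one running before you clean anything up:

```shell
# Print the version of the currently running kernel.
uname -r
# With the packages above installed and booted, this should report
# something like: 3.17.1-031701-generic
```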
After a successful reboot, you can delete the *.deb packages again:<br />
<br />
<div class="bash">$> rm linux-*</div><br />
<br />
<h3>Uninstall the Linux Kernel again (if you don't like it anymore)</h3><br />
You can uninstall the new kernel with the following command; be aware that removing the kernel you are currently running may make your system unusable.<br />
<br />
<div class="bash">$> sudo apt-get remove 'linux-headers-3.17.1*' 'linux-image-3.17.1*'</div><br />
<h3>Setup the VirtualBox Kernel module</h3><br />
<div class="bash">$> sudo /etc/init.d/vboxdrv setup</div><br />
You'll see output similar to:<br />
<br />
<div class="code gray-box">Stopping VirtualBox kernel modules ...done.<br />
Uninstalling old VirtualBox DKMS kernel modules ...done.<br />
Trying to register the VirtualBox kernel modules using DKMS ...done.<br />
Starting VirtualBox kernel modules ...done.<br />
</div><br />
Now you are done!<br />
Christian Schmidthttp://www.blogger.com/profile/01625744616539341979noreply@blogger.com0tag:blogger.com,1999:blog-2525946083367405222.post-83266767890424594552015-03-07T00:57:00.000+01:002015-03-07T00:58:01.255+01:00Linux Mint: Install and configure OpenSSH<h2>Objective</h2><br />
It should be possible to log-into my new desktop computer from my laptop via ssh.<br />
<br />
<h2>Motivation</h2><br />
Sometimes I want to run a script or program on my more powerful desktop computer while I am sitting in front of my laptop. Furthermore, I want to be able to configure my desktop computer remotely. To achieve this I need to install and configure OpenSSH, including X11 forwarding.<br />
<br />
<h2>Prerequisites</h2><br />
<ul><li>Linux Mint 17.1 Rebecca</li>
<li>A second computer or laptop with a ssh-client e.g. putty installed</li>
</ul><br />
<h2>Solution</h2><br />
<h3>Install the OpenSSH packages</h3><br />
Open a terminal. Download and install the openssh server and client package:<br />
<br />
<div class="bash">$> sudo apt-get install openssh-server openssh-client</div><br />
you'll see some output like this:<br />
<br />
<div class="code gray-box">Reading package lists... Done<br />
Building dependency tree <br />
Reading state information... Done<br />
openssh-client is already the newest version.<br />
Suggested packages:<br />
rssh molly-guard monkeysphere<br />
Recommended packages:<br />
ncurses-term ssh-import-id<br />
The following NEW packages will be installed:<br />
openssh-server openssh-sftp-server<br />
0 upgraded, 2 newly installed, 0 to remove and 0 not upgraded.<br />
Need to get 354 kB of archives.<br />
After this operation, 1.072 kB of additional disk space will be used.<br />
Get:1 http://archive.ubuntu.com/ubuntu/ trusty-updates/main openssh-sftp-server amd64 1:6.6p1-2ubuntu2 [34,1 kB]<br />
Get:2 http://archive.ubuntu.com/ubuntu/ trusty-updates/main openssh-server amd64 1:6.6p1-2ubuntu2 [319 kB]<br />
Fetched 354 kB in 0s (781 kB/s) <br />
Preconfiguring packages ...<br />
Selecting previously unselected package openssh-sftp-server.<br />
(Reading database ... 159066 files and directories currently installed.)<br />
Preparing to unpack .../openssh-sftp-server_1%3a6.6p1-2ubuntu2_amd64.deb ...<br />
Unpacking openssh-sftp-server (1:6.6p1-2ubuntu2) ...<br />
Selecting previously unselected package openssh-server.<br />
Preparing to unpack .../openssh-server_1%3a6.6p1-2ubuntu2_amd64.deb ...<br />
Unpacking openssh-server (1:6.6p1-2ubuntu2) ...<br />
Processing triggers for man-db (2.6.7.1-1ubuntu1) ...<br />
Processing triggers for ureadahead (0.100.0-16) ...<br />
ureadahead will be reprofiled on next reboot<br />
Processing triggers for ufw (0.34~rc-0ubuntu2) ...<br />
Setting up openssh-sftp-server (1:6.6p1-2ubuntu2) ...<br />
Setting up openssh-server (1:6.6p1-2ubuntu2) ...<br />
Creating SSH2 RSA key; this may take some time ...<br />
Creating SSH2 DSA key; this may take some time ...<br />
Creating SSH2 ECDSA key; this may take some time ...<br />
Creating SSH2 ED25519 key; this may take some time ...<br />
ssh start/running, process 3190<br />
Processing triggers for ureadahead (0.100.0-16) ...<br />
Processing triggers for ufw (0.34~rc-0ubuntu2) ...<br />
</div><br />
<h3>Configure OpenSSH</h3><br />
The default configuration for OpenSSH on Linux Mint Rebecca, located at <code>/etc/ssh/sshd_config</code>, should already work fine without any further adjustment. The only limitation is that you cannot log in as <code>root</code> by default.<br />
<br />
If you want to allow remote login as <code>root</code>, open a shell and edit <code>/etc/ssh/sshd_config</code>:<br />
<div class="bash">$> gksu gedit /etc/ssh/sshd_config</div><br />
Now change the following line to:<br />
<br />
<div class="code gray-box">PermitRootLogin yes</div><br />
Now you are done and can do a test log-in.<br />
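Before testing an actual login, you can sanity-check that the directive took effect. The snippet below is a self-contained sketch: it greps a stand-in file written on the fly, since the same <code>grep</code> against the real <code>/etc/ssh/sshd_config</code> requires the setup from this post:

```shell
# Stand-in for /etc/ssh/sshd_config, written to a temp file for illustration.
f=$(mktemp)
printf 'Port 22\nPermitRootLogin yes\n' > "$f"
# The same grep works on the real file:
#   grep -E '^PermitRootLogin' /etc/ssh/sshd_config
grep -E '^PermitRootLogin' "$f"     # prints: PermitRootLogin yes
rm -f "$f"
```

On the real system, remember to restart the SSH service after editing the config (for example with <code>sudo service ssh restart</code>) before testing the root login.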
<br />
Christian Schmidthttp://www.blogger.com/profile/01625744616539341979noreply@blogger.com0tag:blogger.com,1999:blog-2525946083367405222.post-47773938780619127752015-02-28T21:22:00.001+01:002015-04-22T00:08:34.924+02:00Linux Mint: Setup autofs to mount automatically NFS-shares from a Synology<h2>Objective</h2><br />
I installed Linux Mint 17.1 Rebecca on a new computer and I want to access the NFS shares from my Synology Disk Station.<br />
<br />
<h2>Motivation</h2><br />
As my Synology is not running 24/7, it would be nice to mount the NFS shares on my client computers on access, instead of integrating the shares statically in fstab. The fstab approach would work, but whenever the Synology is not running, booting my computer would stall until the mount attempt times out. I had already set up <code>autofs</code> for this "mount on demand" purpose on my laptop, which is not always in the same network as my Synology.<br />
<br />
<h2>Prerequisites</h2><br />
<ul><li>Linux Mint 17.1 Rebecca</li>
<li>DS209+II</li>
</ul><br />
<h2>Solution</h2><br />
<h3>Install the autofs package</h3><br />
Download and install the autofs package via the following command from a shell:<br />
<br />
<div class="bash">$> sudo apt-get install autofs</div><br />
you'll see some output like this:<br />
<br />
<div class="code gray-box">Reading package lists... Done<br />
Building dependency tree <br />
Reading state information... Done<br />
Recommended packages:<br />
nfs-common<br />
The following NEW packages will be installed:<br />
autofs<br />
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.<br />
Need to get 281 kB of archives.<br />
After this operation, 1.671 kB of additional disk space will be used.<br />
Get:1 http://archive.ubuntu.com/ubuntu/ trusty-updates/main autofs amd64 5.0.7-3ubuntu3.1 [281 kB]<br />
Fetched 281 kB in 0s (496 kB/s)<br />
Selecting previously unselected package autofs.<br />
(Reading database ... 158913 files and directories currently installed.)<br />
Preparing to unpack .../autofs_5.0.7-3ubuntu3.1_amd64.deb ...<br />
Unpacking autofs (5.0.7-3ubuntu3.1) ...<br />
Processing triggers for ureadahead (0.100.0-16) ...<br />
ureadahead will be reprofiled on next reboot<br />
Processing triggers for man-db (2.6.7.1-1ubuntu1) ...<br />
Setting up autofs (5.0.7-3ubuntu3.1) ...<br />
Creating config file /etc/auto.master with new version<br />
Creating config file /etc/auto.net with new version<br />
Creating config file /etc/auto.misc with new version<br />
Creating config file /etc/auto.smb with new version<br />
Creating config file /etc/default/autofs with new version<br />
autofs start/running, process 3481<br />
Processing triggers for ureadahead (0.100.0-16) ...</div><br />
Additionally install the nfs-common package, otherwise you won't be able to access the NFS shares.<br />
<br />
<div class="bash">$> sudo apt-get install nfs-common</div><br />
<div class="code gray-box">Reading package lists... Done<br />
Building dependency tree <br />
Reading state information... Done<br />
The following extra packages will be installed:<br />
libgssglue1 libnfsidmap2 libtirpc1 rpcbind<br />
Suggested packages:<br />
open-iscsi watchdog<br />
The following NEW packages will be installed:<br />
libgssglue1 libnfsidmap2 libtirpc1 nfs-common rpcbind<br />
0 upgraded, 5 newly installed, 0 to remove and 0 not upgraded.<br />
Need to get 342 kB of archives.<br />
After this operation, 1.375 kB of additional disk space will be used.<br />
Do you want to continue? [Y/n] Y<br />
Get:1 http://archive.ubuntu.com/ubuntu/ trusty/main libgssglue1 amd64 0.4-2ubuntu1 [19,7 kB]<br />
Get:2 http://archive.ubuntu.com/ubuntu/ trusty/main libnfsidmap2 amd64 0.25-5 [32,2 kB]<br />
Get:3 http://archive.ubuntu.com/ubuntu/ trusty/main libtirpc1 amd64 0.2.2-5ubuntu2 [71,3 kB]<br />
Get:4 http://archive.ubuntu.com/ubuntu/ trusty-updates/main rpcbind amd64 0.2.1-2ubuntu2.1 [37,0 kB]<br />
Get:5 http://archive.ubuntu.com/ubuntu/ trusty-updates/main nfs-common amd64 1:1.2.8-6ubuntu1.1 [182 kB]<br />
Fetched 342 kB in 1s (288 kB/s) <br />
Selecting previously unselected package libgssglue1:amd64.<br />
(Reading database ... 158963 files and directories currently installed.)<br />
Preparing to unpack .../libgssglue1_0.4-2ubuntu1_amd64.deb ...<br />
Unpacking libgssglue1:amd64 (0.4-2ubuntu1) ...<br />
Selecting previously unselected package libnfsidmap2:amd64.<br />
Preparing to unpack .../libnfsidmap2_0.25-5_amd64.deb ...<br />
Unpacking libnfsidmap2:amd64 (0.25-5) ...<br />
Selecting previously unselected package libtirpc1:amd64.<br />
Preparing to unpack .../libtirpc1_0.2.2-5ubuntu2_amd64.deb ...<br />
Unpacking libtirpc1:amd64 (0.2.2-5ubuntu2) ...<br />
Selecting previously unselected package rpcbind.<br />
Preparing to unpack .../rpcbind_0.2.1-2ubuntu2.1_amd64.deb ...<br />
Unpacking rpcbind (0.2.1-2ubuntu2.1) ...<br />
Selecting previously unselected package nfs-common.<br />
Preparing to unpack .../nfs-common_1%3a1.2.8-6ubuntu1.1_amd64.deb ...<br />
Unpacking nfs-common (1:1.2.8-6ubuntu1.1) ...<br />
Processing triggers for man-db (2.6.7.1-1ubuntu1) ...<br />
Processing triggers for ureadahead (0.100.0-16) ...<br />
Setting up libgssglue1:amd64 (0.4-2ubuntu1) ...<br />
Setting up libnfsidmap2:amd64 (0.25-5) ...<br />
Setting up libtirpc1:amd64 (0.2.2-5ubuntu2) ...<br />
Setting up rpcbind (0.2.1-2ubuntu2.1) ...<br />
Removing any system startup links for /etc/init.d/rpcbind ...<br />
rpcbind start/running, process 5972<br />
Processing triggers for ureadahead (0.100.0-16) ...<br />
Setting up nfs-common (1:1.2.8-6ubuntu1.1) ...<br />
Creating config file /etc/idmapd.conf with new version<br />
Creating config file /etc/default/nfs-common with new version<br />
Adding system user `statd' (UID 115) ...<br />
Adding new user `statd' (UID 115) with group `nogroup' ...<br />
Not creating home directory `/var/lib/nfs'.<br />
statd start/running, process 6205<br />
gssd stop/pre-start, process 6239<br />
idmapd start/running, process 6286<br />
Processing triggers for libc-bin (2.19-0ubuntu6.6) ...<br />
Processing triggers for ureadahead (0.100.0-16) ...</div><br />
<h3>Configure autofs</h3><br />
First create the destination directory where you want to mount the directories of your NFS server into. In my case this is <code>/mnt/DiskStation</code>.<br />
<br />
<div class="bash">$> sudo mkdir -p /mnt/DiskStation</div><br />
Now add the mount point for your exported NFS shares to <code>/etc/auto.master</code>. Open the file:<br />
<br />
<div class="bash">$> gksu gedit /etc/auto.master</div><br />
and append the following line:<br />
<br />
<div class="code gray-box">/mnt/DiskStation /etc/auto.nfs</div><br />
<div class="code gray-box">#<br />
# Sample auto.master file<br />
#<br />
# ...<br />
#<br />
#/misc /etc/auto.misc<br />
#/net -hosts<br />
#<br />
# Include /etc/auto.master.d/*.autofs<br />
#<br />
+dir:/etc/auto.master.d<br />
#<br />
# Include central master map if it can be found using<br />
# nsswitch sources.<br />
#<br />
# ...<br />
#<br />
+auto.master<br />
/mnt/DiskStation /etc/auto.nfs</div><br />
Create <code>/etc/auto.nfs</code> with the following content, where "192.168.0.99" is the IP of your Synology and "data" is the name of the exported directory:<br />
<div class="code gray-box">data 192.168.0.99:/volume1/data</div><br />
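The map entry above relies on autofs defaults. If you need explicit mount options, they go between the key and the location. The line below is my own example (not something the original setup requires); the flags are standard autofs/NFS mount options:

```
data -fstype=nfs,rw,soft 192.168.0.99:/volume1/data
```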
Finally restart <code>autofs</code><br />
<br />
<div class="bash">$> sudo service autofs restart</div><br />
<div class="code gray-box">autofs stop/waiting<br />
autofs start/running, process 4084</div><br />
Done. Now you should be able to access the files from your Synology at <code>/mnt/DiskStation/data</code>.<br />
<br />
<br />
More information on autofs can be found <a href="http://wiki.ubuntuusers.de/Autofs">here</a>.<br />
Christian Schmidthttp://www.blogger.com/profile/01625744616539341979noreply@blogger.com4tag:blogger.com,1999:blog-2525946083367405222.post-42651336296567455492015-02-28T19:15:00.001+01:002015-02-28T19:23:42.809+01:00Linux Mint: Move your home-directory into a separate partition after installation<h2>Objective</h2><br />
I installed Linux Mint 17.1 Rebecca on a new computer with an SSD, using the Mint installer's default partitioning scheme, which places <code>/boot</code>, the root filesystem and <code>/home</code> on the same partition.<br />
<br />
<h2>Motivation</h2><br />
I decided to keep my home directory in a separate partition mounted at <code>/home</code>, to make later system upgrades a little less painful.<br />
<br />
<h2>Prerequisites</h2><br />
<ul><li>Linux Mint 17.1 Rebecca</li>
<li>512 GB SSD</li>
</ul><br />
<h2>Solution</h2><br />
<h3>Create a new partition</h3><br />
Resize your system partition and create a new partition in the free space. Follow <a href="http://www.howtogeek.com/114503/how-to-resize-your-ubuntu-partitions/">this guide</a> to see how you can resize Ubuntu partitions to complete this step.<br />
<br />
<h3>Copy the home files into the new partition</h3><br />
Open a terminal and run the following command to copy your current <code>/home</code> directory onto the new partition. Here, <code>/media/HOME</code> is the mount location of the newly created partition (I gave it the label <code>HOME</code> during the creation process) where the new <code>/home</code> should reside:<br />
<br />
<div class="bash">$> sudo cp -Rp /home/* /media/HOME</div><br />
You should check that everything went fine, to avoid losing data:<br />
<br />
<div class="bash">$> ls /media/HOME</div><br />
In my case I got:<br />
<br />
<div class="code gray-box">csch data lost+found</div><br />
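A stricter check than eyeballing <code>ls</code> is a recursive diff of the two trees. The snippet below is a self-contained sketch using throwaway directories that stand in for <code>/home</code> and <code>/media/HOME</code>:

```shell
# Throwaway dirs stand in for /home and /media/HOME.
src=$(mktemp -d); dst=$(mktemp -d)
mkdir -p "$src/csch"
echo "data" > "$src/csch/.bashrc"    # dotfiles must survive the copy too
cp -Rp "$src/." "$dst/"              # "src/." also picks up hidden entries
diff -r "$src" "$dst" && echo "copy verified"
rm -rf "$src" "$dst"
```

If <code>diff -r</code> prints nothing and exits successfully, the trees match. Note that copying <code>/home/.</code> also catches hidden entries directly under <code>/home</code>, which the <code>/home/*</code> glob above would skip.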
<h3>Determine the UUID of the newly created partition</h3><br />
Use the following command to get the UUID of your new home-partition:<br />
<br />
<div class="bash">$> sudo blkid</div><br />
In my case I got (you can see the label HOME again here)<br />
<br />
<div class="code gray-box">...<br />
/dev/sda2: UUID="f270b74b-ce14-4481-bf32-1226b4fd776e" TYPE="ext4" <br />
/dev/sda3: LABEL="HOME" UUID="a9c81163-f588-462a-89b0-dbdad87cef9c" TYPE="ext4" <br />
/dev/sda6: UUID="023cf9e6-199f-475c-9fe1-c70b73d3047c" TYPE="swap" <br />
...</div><br />
<h3>Adapt your mount table in fstab to mount the new partition into "/home"</h3><br />
Make a backup of your current fstab (with a timestamp):<br />
<br />
<div class="bash">$> sudo cp /etc/fstab /etc/fstab.$(date +%Y-%m-%d)_backup</div><br />
and edit the original fstab:<br />
<br />
<div class="bash">$> gksu gedit /etc/fstab</div><br />
Add this line to your fstab and save the file (replace xxxxx with your UUID):<br />
<br />
<div class="code gray-box">...<br />
# (identifier) (location) (format, eg ext3 or ext4) (some settings) <br />
UUID=xxxxx /home ext4 nodev,nosuid 0 2 </div><br />
<h3>Move home-directory into a backup and create a new mount-directory</h3><br />
<div class="bash">$> cd / && sudo mv /home /old_home && sudo mkdir /home</div><br />
Now you're done. Finally, reboot and pray!<br />
<br />
<div class="bash">$> sudo shutdown -r now</div><br />
After your system is up again, you can safely clean up the backup (note the name <code>/old_home</code> from the move above):<br />
<br />
<div class="bash">$> sudo rm -rf /old_home</div><br />
<h3>Remark</h3>Additional info on how to deal with separate home-partitions can be found <a href="https://help.ubuntu.com/community/Partitioning/Home/Moving">here</a>.<br />
<br />
Christian Schmidthttp://www.blogger.com/profile/01625744616539341979noreply@blogger.com4tag:blogger.com,1999:blog-2525946083367405222.post-34069275320219962502014-12-20T18:58:00.002+01:002014-12-20T20:30:17.658+01:00Linux Mint: Install Xbian on Raspberry Pi<h2>Objective</h2><br />
The following description explains how to install an XBian image on an SD card to run on your Raspberry Pi. The installation is done on Linux.<br />
<br />
<h2>Motivation</h2><br />
Recently, I bought a Raspberry Pi and wanted to connect it to my TV to watch YouTube videos and movies stored on my NAS over a Wi-Fi connection.<br />
<br />
<h2>Prerequisites</h2><br />
<ul><li>Linux Mint 17 Qiana</li>
<li>Raspberry Pi B model (both 512 MB and 256 MB version are supported)</li>
<li>2 GB (or bigger) SD card</li>
<li>Power adapter for your Raspberry Pi</li>
<li>Something to play your media from (USB disk or network share)</li>
<li>Remote, for example your TV remote (if your TV supports CEC), smartphone (XBMC Remote app), infrared remote, keyboard/mouse</li>
<li>Computer with a SD card reader for installing XBian on your SD card</li>
<li>Ethernet cable or WiFi dongle for your Raspberry Pi</li>
</ul><br />
<h2>Solution</h2><br />
<h3>Preparation of the Xbian image file</h3><br />
Go to the XBian images <a href="http://www.xbian.org/getxbian">download</a> section and select the newest XBian image for your download. In my case this was the <a href="http://sourceforge.net/projects/xbian/files/release/XBian_1.0_RC_3_Raspberry_Pi.img.gz/download">XBian 1.0 Release Candidate 3</a>.<br />
<br />
Open a command shell and go (via 'cd' command) to the folder where you downloaded the image file.<br />
<br />
Uncompress the image by the command:<br />
<br />
<div class="bash">$> gunzip XBian_1.0_RC_3_Raspberry_Pi.img.gz</div><br />
<h3>Preparation of the SDCard</h3><br />
Insert your SD card into the SD card slot of your computer. The card will usually be mounted automatically and integrated into the filesystem of your computer. If this is not the case, you can also mount it via the shell command:<br />
<br />
<div class="bash">$> sudo mount -t vfat -o ro /dev/mmcblk0p1 /media/mmcblk0p1</div><br />
The <b>mmcblk0p1</b> here is the device identifier for your SDCard and may vary in your case. To determine the device identifier, you can use the command:<br />
<br />
<div class="bash">$> sudo fdisk -l</div><br />
you will get a result similar to mine:<br />
<br />
<div class="code gray-box">Disk /dev/sda: 640.1 GB, 640135028736 bytes<br />
<br />
&lt;skip the info of the hard disk&gt;<br />
<br />
Disk /dev/mmcblk0: 15.9 GB, 15931539456 bytes<br />
4 heads, 16 sectors/track, 486192 cylinders, total 31116288 sectors<br />
Units = sectors of 1 × 512 = 512 bytes<br />
Sector size (logical/physical): 512 bytes / 512 bytes<br />
I/O size (minimum/optimal): 512 bytes / 512 bytes<br />
Disk identifier: 0x0009be7b<br />
<br />
Device     Boot      Start         End      Blocks   Id  System<br />
/dev/mmcblk0p1            2048    31116287    15557120    b  W95 FAT32</div><br />
Copy the downloaded image onto the SDCard:<br />
<br />
<div class="bash">$> sudo dd if=XBian_1.0_RC_3_Raspberry_Pi.img of=/dev/mmcblk0</div><br />
The copying procedure takes some minutes, so be patient...<br />
<br />
The final output after a successful copy will be something like:<br />
<br />
<div class="code gray-box">1135488+0 records in<br />
1135488+0 records out<br />
581369856 bytes (581 MB) copied, 366.661 s, 1.6 MB/s</div><br />
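Before booting the Pi, it can be worth verifying that the image really landed on the card. The idea: <code>cmp -n</code> compares only the first image-length bytes, since the card is larger than the image. This is a self-contained sketch in which temp files stand in for the real <code>.img</code> file and <code>/dev/mmcblk0</code>:

```shell
# Temp files stand in for the image file and /dev/mmcblk0.
img=$(mktemp); card=$(mktemp)
head -c 1024 /dev/urandom > "$img"
cp "$img" "$card"
head -c 512 /dev/zero >> "$card"     # the "card" is larger than the image
# Compare only the image-sized prefix of the card against the image:
cmp -n "$(stat -c%s "$img")" "$img" "$card" && echo "image verified"
rm -f "$img" "$card"
```

On the real system the same comparison would be <code>sudo cmp -n "$(stat -c%s XBian_1.0_RC_3_Raspberry_Pi.img)" XBian_1.0_RC_3_Raspberry_Pi.img /dev/mmcblk0</code>.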
Setup done!<br />
Now you can remove the SD card from your computer, put it into your Raspberry Pi and start the Pi up.Christian Schmidthttp://www.blogger.com/profile/01625744616539341979noreply@blogger.com0tag:blogger.com,1999:blog-2525946083367405222.post-32266140125121765682014-12-20T00:38:00.001+01:002014-12-23T22:53:55.429+01:00Upgrade your Qt-Installation to Qt 5.4 by a clean re-install<h2>Objective</h2><br />
I wanted to upgrade my <b>Qt 5.3.1</b> to the newly released version <b>Qt 5.4.0</b> by a clean re-install.<br />
My last install was done by executing the installer script <b>./qt-opensource-linux-x64-5.3.1.run</b>.<br />
So I asked myself how to cleanly uninstall an old Qt version.<br />
<br />
<h2>Motivation</h2><br />
I noticed that I'll soon run out of disk space on my system disk, so I decided to uninstall the old version before installing the new one. Additionally, I wanted to install <b>QtCreator 3.3</b>, which corresponds to <b>Qt 5.4.0</b>.<br />
<br />
<h2>Prerequisites</h2><br />
<b>Linux Mint</b>: 17 - Qiana<br />
<b>Qt-Installer Script</b>: qt-opensource-linux-x64-5.4.0.run<br />
<br />
<h2>Solution</h2><br />
<h3>Download the installer script</h3><br />
1. Go to the Qt-Homepage: <a href="https://qt-project.org/">qt-project.org</a><br />
2. Download the installer script ./qt-opensource-linux-x64-5.4.0.run<br />
It is not necessary to download the Qt-Creator installer script, because <b>Qt-Creator 3.3</b> is included by the Qt-Installer script.<br />
<br />
<h3>Uninstall the old version (in my case: Qt 5.3.1)</h3><br />
1. Open a Terminal window.<br />
<br />
2. Login as root:<br />
<br />
<div class="bash">$ su -</div><br />
3. Navigate to the folder where you installed the older version of Qt (in my case <code>/opt/Qt5.3.1</code>):<br />
<br />
<div class="bash">$ cd /opt/Qt5.3.1</div><br />
4. Start the <b>MaintenanceTool</b><br />
<br />
<div class="bash">$ ./MaintenanceTool</div><br />
5. Choose <b>Remove</b> from the options of the dialog shown.<br />
<br />
<h3>Install <b>Qt 5.4.0</b></h3><br />
1. Within the same shell (root-shell), go to the folder where you downloaded the installer script <b>qt-opensource-linux-x64-5.4.0.run</b> and type:<br />
<br />
<div class="bash">$ chmod ugo+x qt-opensource-linux-x64-5.4.0.run<br />
$ ./qt-opensource-linux-x64-5.4.0.run</div><br />
2. Follow the instructions of the graphical installer.<br />
<br />
Done.Christian Schmidthttp://www.blogger.com/profile/01625744616539341979noreply@blogger.com0tag:blogger.com,1999:blog-2525946083367405222.post-32362979110711011232013-11-16T20:39:00.001+01:002014-12-23T22:59:16.006+01:00Linux Mint - Mate Desktop: Configure transmission-gtk to handle magnet-links in Google Chrome<h2>Objective</h2><br />
The goal of this tutorial is to make <b>Google Chrome</b> automatically start <b>transmission-gtk</b> when you click on a <b>magnet-link</b> within your browser.<br />
<br />
<h2>Motivation</h2><br />
I had been using Firefox (and previously the Mozilla Suite) as my favorite web browser for nearly everything (on Linux, Windows and Mac). Recently I experimented with Google Chrome and figured out that it is a nice alternative to Firefox, and in many cases much faster. One of the drawbacks I encountered on Linux, however, is that it handles foreign or unknown protocols differently than Firefox: it merely relies on "xdg-open", which is not configured correctly for every desktop environment, as in my case.<br />
<br />
<h2>Prerequisites</h2><br />
<b>Linux Mint</b> 15 - Olivia<br />
Desktop - <b>MATE</b> 1.6<br />
<b>Google Chrome</b> (v31.0.1650.57)<br />
<b>transmission-gtk</b> (v2.77-14031)<br />
<b>xdg-open</b> (v1.0.2)<br />
<br />
The problem might also occur with other distributions and versions; the above is just my current environment.<br />
<br />
<h2>Solution</h2><br />
<h3>Reproduce the problem</h3><br />
1. Start Google Chrome<br />
2. Navigate to an internet site that provides a magnet-link<br />
3. Click on the link<br />
4. If Google Chrome opens just another window or tab, you face the problem<br />
<br />
<h3>Solve the problem</h3><br />
In contrast to Firefox, which handles the management of external protocol-handlers itself, Google Chrome relies on the underlying system. In this particular environment that is the "xdg-open" script. Unfortunately, this script does not support MATE as a native desktop environment and therefore calls the generic handler for URLs, which appears to be Google Chrome itself.<br />
<br />
In this solution, we establish the external application "transmission-gtk" as the handler for magnet-links within Google Chrome.<br />
<br />
1.<br />
Check where the "<b>transmission-gtk.desktop</b>" file can be found. E.g. use the command "<b>locate transmission-gtk.desktop</b>". In my case it's located in "<b>/usr/share/applications/</b>".<br />
<br />
2.<br />
Now check the content, by opening the file with an editor of your choice. Be sure that you find the statements<br />
<br />
<code><b><span style="color: red;">Exec=transmission-gtk %U</span></b></code><br />
<br />
and<br />
<br />
<code><b><span style="color: red;">MimeType=application/x-bittorrent;x-scheme-handler/magnet;</span></b></code><br />
<br />
in the file. Ensure that the first statement contains "<b>%U</b>", because this is the placeholder for the concrete URL passed through by Google Chrome.<br />
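Both checks can be scripted. The sketch below writes a minimal stand-in desktop file (the real one is the <code>transmission-gtk.desktop</code> located in step 1) and greps for the two statements:

```shell
# Minimal stand-in for /usr/share/applications/transmission-gtk.desktop.
f=$(mktemp)
cat > "$f" <<'EOF'
[Desktop Entry]
Exec=transmission-gtk %U
MimeType=application/x-bittorrent;x-scheme-handler/magnet;
EOF
# The two checks described above, as greps:
grep -q 'Exec=.*%U' "$f" \
  && grep -q 'x-scheme-handler/magnet' "$f" \
  && echo "desktop file looks OK"
rm -f "$f"
```

Run the same two <code>grep</code> commands against the real file path found in step 1 to verify your actual installation.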
<br />
3.<br />
Configure your system that "transmission-gtk" is the default handler for magnet-links by executing the following command in your shell:<br />
<br />
<div class="bash">$ xdg-mime default transmission-gtk.desktop x-scheme-handler/magnet</div><br />
4.<br />
Enable "xdg-open" to treat your MATE desktop as a Gnome desktop: MATE shares the same ancestry and is therefore compatible with Gnome, but the "xdg-open" script does not recognize it because of its different name.<br />
<br />
For this step you need root permission, so be careful what you are doing.<br />
Locate the xdg-open script in your system: "which xdg-open". In my case this leads to "/usr/bin/xdg-open".<br />
Open the file as root (again with your editor of choice) e.g. "sudo vi /usr/bin/xdg-open".<br />
Search for a section (<b>Note: The following part is just hack and not a solid solution</b>):<br />
<br />
<div class="code gray-box">if [ x"$DE" = x"" ]; then<br />
DE=<b>generic</b><br />
fi<br />
</div><br />
and change it to <br />
<br />
<div class="code gray-box">if [ x"$DE" = x"" ]; then<br />
DE=<b><span style="color: red;">gnome</span></b><br />
fi<br />
</div><br />
Now, save the file.<br />
<br />
5.<br />
Restart your desktop session and try to reproduce the problem again. It should be gone now, and transmission-gtk should open to handle magnet-links instead of Google Chrome.Christian Schmidthttp://www.blogger.com/profile/01625744616539341979noreply@blogger.com0tag:blogger.com,1999:blog-2525946083367405222.post-2164457127896852262013-08-02T21:01:00.002+02:002014-12-23T23:20:22.153+01:00Ruby Version Management: Using rbenv in favor of rvm<h2>Objective</h2><br />
As I still consider myself a beginner with the Ruby programming language, I'd like to write a little "HowTo" on installing and managing versions of the Ruby interpreter on my local machine, as a personal reminder.<br />
<br />
Since I had always used the "Ruby Version Manager (<a href="https://rvm.io/"><span style="color: blue;">rvm</span></a>)" for this job, I decided this time, with ruby 2.0 and rails 4.0, to experiment with "Ruby Environment (<a href="https://github.com/sstephenson/rbenv"><span style="color: blue;">rbenv</span></a>)", which the <a href="http://rubyonrails.org/"><span style="color: blue;">Ruby on Rails</span></a> homepage also recommends as a substitute for rvm. With rvm I always got confused about system-wide installation using sudo versus installation just for the current user. Hopefully, I can escape this mess using rbenv.<br />
<br />
<h2>Prerequisites</h2><br />
At the moment, I am sitting in front of my MacBookAir with an installed Mountain Lion OS X (10.8.x) and only the default Ruby version running, which is "<b>ruby 1.8.7</b>".<br />
<br />
To start the process of installation, I moved on from the RoR homepage to<br />
<br />
<a href="https://github.com/sstephenson/rbenv"><span style="color: blue;">https://github.com/sstephenson/rbenv</span></a><br />
<br />
where I expected to find some instructions to start with.<br />
<br />
Yeah! <b>The documentation there seems to cover all the aspects of rbenv</b>. So I'll just leave it by the <a href="https://github.com/sstephenson/rbenv#groom-your-apps-ruby-environment-with-rbenv"><span style="color: blue;">link to the documentation</span></a> and only write about my experience installing rbenv and ruby 2.0.<br />
<br />
<h2>Installation</h2><div><br />
<h3>Additional Prerequisites</h3><br />
After reading the instructions, I decided to install rbenv from the source distribution on GitHub instead of using Homebrew. While starting the process I noticed that I hadn't even installed the Mac vi editor "<code>mvim</code>" yet, but still had an "<code>alias vi='~/ApplicationsMacVim-snapshot63/mvim'</code>" in my "<code>~/.bash_profile</code>". It must have been a leftover from a previous Mac OS X installation. Therefore I quickly jumped to the developer site of <a href="http://code.google.com/p/macvim/"><span style="color: blue;">macvim</span></a> and installed the proper version. After adjusting the alias I could proceed with installing rbenv.<br />
<br />
Note: Ensure that after changing the "alias" you restart your shell or at least re-read your <code>"~/.bash_profile"</code> by the command "<code>source ~/.bash_profile</code>". With the command "<code>alias</code>" you can list all "aliases" and check if everything is correct.<br />
<br />
To avoid confusion: in the following paragraphs, "Step X" always refers to the step number in the original rbenv installation instructions.<br />
<br />
<h3>Step 2</h3><br />
I couldn't follow Step 2 of the installation process literally, because I am already using a customized <code>~/.bash_profile</code>, and the instructions would just append another export statement for the <code>$PATH</code> variable at its end. Therefore, I adjusted my <code>~/.bash_profile</code> manually by exchanging the following line<br />
<br />
<div class="code gray-box">export PATH=${PATH}:${HOME}/bin</div><br />
by<br />
<br />
<div class="code gray-box">export PATH=$HOME/.rbenv/bin:${PATH}:${HOME}/bin</div><br />
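The reason <code>$HOME/.rbenv/bin</code> must come first is ordinary PATH resolution: the first matching executable wins. A self-contained sketch with two stand-in <code>ruby</code> scripts demonstrates this:

```shell
# Two stand-in "ruby" executables; the directory listed first in PATH wins,
# just like rbenv's bin (and later its shims) must shadow the system ruby.
root=$(mktemp -d)
mkdir -p "$root/rbenv-bin" "$root/system-bin"
printf '#!/bin/sh\necho rbenv-ruby\n'  > "$root/rbenv-bin/ruby"
printf '#!/bin/sh\necho system-ruby\n' > "$root/system-bin/ruby"
chmod +x "$root/rbenv-bin/ruby" "$root/system-bin/ruby"
PATH="$root/rbenv-bin:$root/system-bin:$PATH" ruby   # prints: rbenv-ruby
rm -rf "$root"
```

The same rule applies once rbenv's shim directory is on the PATH: it has to come before any system ruby for version switching to work.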
<h3>Step 3</h3><br />
I also applied Step 3 by hand, keeping in mind that the '<code>export PATH=...</code>' must appear earlier in the file than '<code>eval "$(rbenv init -)"</code>'.<br />
<br />
<h3>Step 5</h3><br />
At Step 5, I switched over to the installation guide for the <code>ruby-build</code> plugin, which was written by the same author as rbenv, <a href="https://github.com/sstephenson"><span style="color: blue;">Sam Stephenson</span></a>. The installation as a plugin for rbenv worked like a charm, so I thought I would be able to install ruby 2.0 now.<br />
<br />
Hum, but how do I have to choose the ruby version exactly?<br />
<br />
Just typing "<code>rbenv install</code>" without any parameter gives an overview about the options that can be used with the install command and as I already expected, there was an option to list all available Ruby versions: "<code>rbenv install -l</code>" which gave me ...<br />
<br />
<div class="code gray-box">2.0.0-dev<br />
2.0.0-p0<br />
2.0.0-p195<br />
2.0.0-p247<br />
2.0.0-preview1<br />
2.0.0-preview2<br />
2.0.0-rc1<br />
2.0.0-rc2<br />
</div><br />
Oops! It seems that I am doomed!<br />
<br />
<b>Which version to install?</b><br />
<b>What is this versioning scheme "dev", "p#", "preview#", "rc#" all about?</b><br />
<br />
I just wanted to install the latest stable version.<br />
<br />
After some investigation, I brought light into the dark: <br />
<br />
<table border="1" style="width: 80%px;"><tbody>
<tr> <th>name-suffix</th> <th>meaning</th> </tr>
<tr> <td>dev#</td> <td>development branch</td> </tr>
<tr> <td>p#</td> <td>stable version at patch level #</td> </tr>
<tr> <td>preview#</td> <td>preview version no #</td> </tr>
<tr> <td>rc#</td> <td>release candidate no #</td></tr>
</tbody></table><br />
So, I finally decided to pick version 2.0.0-p247:<br />
<br />
<div class="bash">$ rbenv install 2.0.0-p247<br />
</div><br />
Surprisingly, the command started to install openssl first on my mac.<br />
<br />
<div class="code gray-box">Downloading openssl-1.0.1e.tar.gz...<br />
-> https://www.openssl.org/source/openssl-1.0.1e.tar.gz<br />
Installing openssl-1.0.1e...<br />
Installed openssl-1.0.1e to /Users/cschmidt/.rbenv/versions/2.0.0-p247<br />
</div><br />
<b>This unfortunately was not mentioned in any place of the instructions</b>, but gladly it is just installed within the "<code>~/.rbenv</code>" directory itself and therefore <b>does not mess up the system</b>. Finally I got ...<br />
<br />
<div class="code gray-box">Downloading ruby-2.0.0-p247.tar.gz... -> http://ftp.ruby-lang.org/pub/ruby/2.0/ruby-2.0.0-p247.tar.gz Installing ruby-2.0.0-p247... Installed ruby-2.0.0-p247 to /Users/cschmidt/.rbenv/versions/2.0.0-p247</div><br />
Nicely, <b>ruby 2.0.0 can be installed using the clang compiler</b> and does not need gcc installed on your Mac, as its predecessor versions (e.g. "ruby 1.9.2") did.<br />
<br />
So, I assume, that ruby 2.0 is installed now. Let's do a last check ...<br />
<br />
<div class="bash">$ ruby -v</div><br />
<div class="code gray-box">ruby 1.8.7 (2012-02-08 patchlevel 358) [universal-darwin12.0]</div><br />
Aha, ... ok, there seems to be some configuration left.<br />
<br />
<h2>Configuration</h2><br />
As I wanted to do this quickly and system-wide:<br />
<br />
<div class="bash">$ rbenv global 2.0.0-p247</div><br />
Check again ...<br />
<br />
<div class="bash">$ ruby -v</div><br />
<div class="code gray-box">ruby 2.0.0p247 (2013-06-27 revision 41674) [x86_64-darwin12.4.0]</div><br />
Finally done.<br />
Christian Schmidthttp://www.blogger.com/profile/01625744616539341979noreply@blogger.com6tag:blogger.com,1999:blog-2525946083367405222.post-14137084337296782402013-08-02T00:57:00.000+02:002014-12-24T03:00:40.460+01:00How to emulate Java "synchronized" keyword in C++<h2>Objective</h2><br />
The objective of this article is to show how to provide the keyword '<b>synchronized</b>' in C++ that works like the well-known '<b>synchronized</b>' keyword in Java for locking and unlocking blocks of code.<br />
<br />
<br />
<u>Example:</u><br />
Imagine you have a FIFO queue where you can add items from different producer threads, while another consumer thread picks up the items for processing.<br />
<br />
<br />
<code>Listing 1:</code><br />
<br />
<div class="code gray-box"><pre>01. public class ItemProcessor {
02. private ArrayList<Item> queue = new ArrayList<Item>();
03.
04. public void putItem(Item item) {
<span style="color: red;">05. synchronized(this) {</span>
06. queue.add(item);
<span style="color: red;">07. }</span>
08. }
09.
10. public int processItem() {
11. Item item = null;
<span style="color: red;">12. synchronized(this) {</span>
13. if (queue.isEmpty()) {
14. return 0;
15. } else {
16. item = queue.get(0);
17. queue.remove(0);
18. }
<span style="color: red;">19. }</span>
20. return processItem(item);
21. }
22.
23. private int processItem(Item item) {
24. int result = 0;
25. // ... do something with item and set result
26. return result;
27. }
28. }</pre></div><br />
In Java you can clearly see which parts of the code are synchronized and which are not.<br />
<br />
<h2>Motivation</h2><br />
During my years working as a professional software engineer I have seen a lot of C++ code that uses spinlocks or mutexes which have to be locked and unlocked manually. Even in situations where the scope is very clear and stays within a single method or function, this can be very error-prone.<br />
<br />
As most of you know, it's very important to have the lock and unlock calls balanced to prevent deadlocks or race-conditions.<br />
<br />
Let's look especially at the more complicated method '<code>processItem</code>' and how it would look in C++ with manual locking and unlocking.<br />
<br />
<code>Listing 2:</code><br />
<br />
<div class="code gray-box"><pre>01. class ItemProcessor {
02. private:
03. Mutex mutex_;
04. std::vector<Item*> queue_;
05.
06. public:
07. void putItem(Item* item) {
<span style="color: red;">08. mutex_.lock();</span>
09. queue_.push_back(item);
<span style="color: red;">10. mutex_.unlock();</span>
11. }
12.
13. int processItem() {
14. Item* item = NULL;
<span style="color: red;">15. mutex_.lock();</span>
16. if (queue_.empty()) {
<span style="color: red;"><b>17. mutex_.unlock(); // If missing -> Candidate for a deadlock!</b></span>
<span style="color: blue;">18. return 0;</span>
19. } else {
20. item = queue_[0];
21. queue_.erase(queue_.begin());
22. }
<span style="color: red;">23. mutex_.unlock();</span>
24. return this->processItem(item);
25. }
26.
27. private:
28. int processItem(Item* item) {
29. int result = 0;
30. // ... do something with item and set result
31. return result;
32. }
33. }</pre></div><br />
As you can see, you need the easily forgotten unlock statement in line 17 to correctly balance your locks and unlocks.<br />
<br />
<br />
As I am not only a C++ programmer but have also used Java very intensively in the past years, I was always attracted by how simple it is to work with synchronized blocks in Java compared to the inconvenient manual locking and unlocking in C++ or Objective-C (Objective-C 2.0 now also has "@synchronized").<br />
<br />
Therefore I thought about a way to extend C++ with a proper '<b>synchronized</b>' keyword that is <b>semantically</b> <b>equal</b> to synchronized in Java. Furthermore, I was curious whether there is a way to also achieve <b>syntactical</b> <b>equality</b>.<br />
<br />
<h2>Evolvement of a solution</h2><br />
My first approach is to implement a template class called <code>SynchronizedBlock</code> that takes a lock class <code>L</code> as a parameter during template instantiation. The specific type of the lock class does not matter; it can be either a spinlock or a mutex, as long as it provides the instance methods '<code>void lock()</code>' and '<code>void unlock()</code>'.<br />
<br />
The template class looks like:<br />
<br />
<code>Listing 3:</code><br />
<br />
<div class="code gray-box"><pre>01. template<typename L> class SynchronizedBlock {
02. public:
03. SynchronizedBlock(L& lock) : lock_(lock) {
04. lock_.lock();
05. }
06. ~SynchronizedBlock() {
07. lock_.unlock();
08. }
09. private:
10. L& lock_;
11. };</pre></div><br />
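To see the RAII mechanics of <code>SynchronizedBlock</code> in isolation, here is a minimal, compilable sketch. <code>CountingLock</code> is a stand-in lock type invented purely for this illustration; any type providing <code>lock()</code> and <code>unlock()</code> would do:

```cpp
#include <cassert>

// Same shape as Listing 3: lock on construction, unlock on destruction.
template<typename L> class SynchronizedBlock {
public:
    explicit SynchronizedBlock(L& lock) : lock_(lock) {
        lock_.lock();
    }
    ~SynchronizedBlock() {
        lock_.unlock();
    }
private:
    L& lock_;
};

// Illustrative stand-in lock that merely counts the calls it receives.
struct CountingLock {
    int locks = 0;
    int unlocks = 0;
    void lock()   { ++locks; }
    void unlock() { ++unlocks; }
};
```

Entering a scope that holds such a block locks exactly once; leaving the scope, by whatever path, unlocks exactly once.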
If you modify the C++ implementation from Listing 2 and use the new template class <code>SynchronizedBlock</code> from Listing 3 as a helper for locking and unlocking, you're doing nothing else than following the famous <b>Resource Acquisition Is Initialization (RAII)</b> pattern coined by B. Stroustrup.<br />
<br />
<code>Listing 4:</code><br />
<br />
<div class="code gray-box"><pre>01. //...
02. int processItem() {
03. Item* item = NULL;
<span style="color: blue;"><b>04. {</b></span>
<span style="color: red;">05. SynchronizedBlock<Mutex> block(mutex_);</span>
06. if (queue_.empty()) {
07. return 0;
08. } else {
09. item = queue_[0];
10. queue_.erase(queue_.begin());
11. }
<b><span style="color: blue;">12. }</span></b>
13. return this->processItem(item);
14. }
15. //...</pre></div><br />
As you can see in Listing 4, the error-prone line 17 from Listing 2 is gone. Manual unlocking is not necessary, because <code>lock</code> is called on the mutex within the constructor of our local variable <code>block</code>, and <code>unlock</code> is called during the destruction of <code>block</code>. In this case <code>block</code> is destructed either in line 7 or in line 12 (when the closing blue curly bracket is reached). To make this automatism happen, it is important to create the variable <code>block</code> on the stack and not on the heap (as you can see, there is no '<code>SynchronizedBlock<Mutex>* block = new SynchronizedBlock<Mutex>(mutex_);</code>' statement). The second important thing you'll notice is that it is necessary to introduce the blue scope brackets to ensure the correct lifetime of our <code>block</code> variable. The destructor must be called before line 13 to ensure semantical equality to the original implementation in Listing 2.<br />
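The early-return case can be verified with a small sketch; <code>FlagLock</code> and <code>takeShortcut</code> are names invented for this illustration. Even when the function returns from inside the scoped block, the destructor runs first and releases the lock:

```cpp
#include <cassert>

// Illustrative stand-in lock that records whether it is currently held.
struct FlagLock {
    bool locked = false;
    void lock()   { locked = true; }
    void unlock() { locked = false; }
};

template<typename L> class SynchronizedBlock {
public:
    explicit SynchronizedBlock(L& lock) : lock_(lock) { lock_.lock(); }
    ~SynchronizedBlock() { lock_.unlock(); }
private:
    L& lock_;
};

// Returning from inside the extra scope still runs ~SynchronizedBlock()
// before the function actually returns, so no manual unlock is needed.
int takeShortcut(FlagLock& m) {
    {
        SynchronizedBlock<FlagLock> block(m);
        return 1;
    }
}
```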
<br />
The need for the blue curly brackets is the point that was still bugging me. In contrast to Java's 'synchronized', I have to define the lifetime scope of my synchronized block manually before I can create the <code>block</code> variable, which makes the usage still a bit inconvenient.<br />
<br />
<h2>Improvement of the syntax</h2><br />
To get rid of this syntactical flaw, which is needed to ensure semantical correctness, I remembered that C++ allows declaring variables within a for-loop followed by curly brackets for the loop body. I thought I could use this fact to my advantage, e.g.<br />
<br />
<code>Listing 5:</code><br />
<br />
<div class="code gray-box"><pre>01. //...
02. for (<span style="color: blue;"><b>int i=0, c=5</b></span>; i<c; ++i) {
03. // do something
04. }
05. //...</pre></div><br />
The lifetimes of the variables "i" and "c" end exactly at line 4 of Listing 5.<br />
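This scope rule can be demonstrated directly. <code>Probe</code> and <code>probeDiesWithLoop</code> are names invented here for illustration; the destructor of the for-init variable flips a flag so we can observe exactly when it runs:

```cpp
#include <cassert>

// Illustrative type: flips a flag when its destructor runs.
struct Probe {
    bool* destroyed_;
    explicit Probe(bool* d) : destroyed_(d) { *destroyed_ = false; }
    ~Probe() { *destroyed_ = true; }
};

// A variable declared in the for-init-statement lives exactly as
// long as the loop: once the loop is left, it is destroyed.
bool probeDiesWithLoop() {
    bool destroyed = true;
    for (Probe p(&destroyed); !destroyed; ) {
        break;  // leave the loop immediately
    }
    return destroyed;  // true only if ~Probe() already ran
}
```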
<br />
To achieve my goal of a convenient syntax for synchronized blocks in C++, which should look like<br />
<br />
<code>Listing 6:</code><br />
<br />
<div class="code gray-box"><pre>01. synchronized(mutex) {
02. // do something
03. }</pre></div><br />
another helper template class called '<code>SynchronizeGuard</code>' is needed, with the following implementation:<br />
<br />
<code>Listing 7:</code><br />
<br />
<div class="code gray-box"><pre>01. template<typename L> class SynchronizeGuard {
02. public:
03. SynchronizeGuard(L& lock);
04. ~SynchronizeGuard();
05. bool isLocked() const;
06. void lock();
07. void unlock();
08. private:
09. L& lock_;
10. volatile bool state_;
11. };
12.
13. template<typename L>
14. inline SynchronizeGuard<L>::SynchronizeGuard(L& lock)
15. : lock_(lock), state_(false) {
16. this->lock();
17. }
18.
19. template<typename L>
20. inline SynchronizeGuard<L>::~SynchronizeGuard() {
21. if (state_)
22. lock_.unlock();
23. }
24.
25. template<typename L>
26. inline bool SynchronizeGuard<L>::isLocked() const {
27. return state_;
28. }
29.
30. template<typename L>
31. inline void SynchronizeGuard<L>::lock() {
32. lock_.lock();
33. state_ = true;
34. }
35.
36. template<typename L>
37. inline void SynchronizeGuard<L>::unlock() {
38. lock_.unlock();
39. state_ = false;
40. }</pre></div><br />
With the help of the template class <code>SynchronizeGuard</code> and the knowledge about the scope of variables in a for-loop, you can express the scope of the block that should be synchronized as follows (again taking the method '<code>processItem</code>' from Listing 4 as an example):<br />
<br />
<code>Listing 8:</code><br />
<br />
<div class="code gray-box"><pre>01. //...
02. int processItem() {
03. Item* item = NULL;
<span style="color: red;">04. for (SynchronizeGuard<Mutex> guard(mutex_); guard.isLocked(); guard.unlock())</span>
<span style="color: blue;"><b>05. {</b></span>
06. if (queue_.empty()) {
07. return 0;
08. } else {
09. item = queue_[0];
10. queue_.erase(queue_.begin());
11. }
<b><span style="color: blue;">12. }</span></b>
13. return this->processItem(item);
14. }
15. //...</pre></div><br />
Now, besides the fact that the synchronization code and the blue curly brackets are finally in the correct order, this statement is much more inconvenient to write than the code in Listing 4.<br />
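That the for-loop header really behaves like a synchronized block, running the body exactly once and releasing the lock on every exit path, can be checked with a counting stand-in mutex. <code>CountingMutex</code>, <code>runOnce</code> and <code>leaveEarly</code> are names invented for this sketch, and the guard is a condensed version of Listing 7:

```cpp
#include <cassert>

// Illustrative stand-in mutex counting its lock()/unlock() calls.
struct CountingMutex {
    int locks = 0;
    int unlocks = 0;
    void lock()   { ++locks; }
    void unlock() { ++unlocks; }
};

// Condensed, behaviourally equivalent version of the guard.
template<typename L> class SynchronizeGuard {
public:
    explicit SynchronizeGuard(L& lock) : lock_(lock), state_(true) { lock_.lock(); }
    ~SynchronizeGuard() { if (state_) lock_.unlock(); }
    bool isLocked() const { return state_; }
    void unlock() { lock_.unlock(); state_ = false; }
private:
    L& lock_;
    bool state_;
};

// Normal path: the header locks, the body runs exactly once,
// the third for-expression unlocks and ends the loop.
int runOnce(CountingMutex& m) {
    int bodyRuns = 0;
    for (SynchronizeGuard<CountingMutex> guard(m); guard.isLocked(); guard.unlock()) {
        ++bodyRuns;
    }
    return bodyRuns;
}

// Early-return path: the destructor, not the for-expression,
// releases the lock, so it is still unlocked exactly once.
int leaveEarly(CountingMutex& m) {
    for (SynchronizeGuard<CountingMutex> guard(m); guard.isLocked(); guard.unlock()) {
        return 42;
    }
    return 0;
}
```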
<br />
Hold on, we still have the preprocessor, even though its overuse in C++ is discouraged. Let's put that nasty line 4 from Listing 8 into a macro:<br />
<br />
<code>Listing 9:</code><br />
<br />
<div class="code gray-box"><pre>01. #define synchronized(lock) \
02. if(false) {} \
03. else \
04. for (SynchronizeGuard<decltype(lock)> guard(lock); guard.isLocked(); guard.unlock())</pre></div><br />
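Here is a minimal, self-contained sketch of such a macro in action. The guard is condensed and <code>CountingMutex</code> and <code>underLock</code> are invented for the illustration; <code>decltype</code> (available since C++11) supplies the template argument, so the macro works with any lock type without spelling it out at the call site:

```cpp
#include <cassert>

// Illustrative stand-in mutex counting its lock()/unlock() calls.
struct CountingMutex {
    int locks = 0;
    int unlocks = 0;
    void lock()   { ++locks; }
    void unlock() { ++unlocks; }
};

// Condensed guard, behaviourally equivalent to Listing 7.
template<typename L> class SynchronizeGuard {
public:
    explicit SynchronizeGuard(L& lock) : lock_(lock), state_(true) { lock_.lock(); }
    ~SynchronizeGuard() { if (state_) lock_.unlock(); }
    bool isLocked() const { return state_; }
    void unlock() { lock_.unlock(); state_ = false; }
private:
    L& lock_;
    bool state_;
};

// decltype(lock) deduces the lock's type at the point of use (C++11),
// so the macro needs no explicit template argument.
#define synchronized(lock) \
    if (false) {} \
    else \
        for (SynchronizeGuard<decltype(lock)> guard(lock); guard.isLocked(); guard.unlock())

int underLock(CountingMutex& m) {
    int result = 0;
    synchronized(m) {
        result = 7;  // executed exactly once while m is held
    }
    return result;
}
```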
Et voilà! Putting it all together, the beautified implementation from Listing 2 can be expressed in C++ very similarly to the code written in Java (Listing 1):<br />
<br />
<code>Listing 10:</code><br />
<br />
<div class="code gray-box"><pre>01. class ItemProcessor {
02. private:
03. Mutex mutex_;
04. std::vector<Item*> queue_;
05.
06. public:
07. void putItem(Item* item) {
<span style="color: red;">08. synchronized(mutex_) {</span>
09. queue_.push_back(item);
<span style="color: red;">10. }</span>
11. }
12.
13. int processItem() {
14. Item* item = NULL;
<span style="color: red;">15. synchronized(mutex_) {</span>
16. if (queue_.empty()) {
17. return 0;
18. } else {
19. item = queue_[0];
20. queue_.erase(queue_.begin());
21. }
<span style="color: red;">22. }</span>
23. return this->processItem(item);
24. }
25. // ...
26. };</pre></div>