sakana http://skyhigh71.github.io/ very short memo en-us Sun, 22 Dec 2013 00:00:00 +0900 http://skyhigh71.github.io/2013/12/22/ipython_setup_on_windows.html http://skyhigh71.github.io/2013/12/22/ipython_setup_on_windows.html <![CDATA[ipython setup on windows]]>

ipython setup on windows

Your life on the Windows platform will be much more convenient with ipython. Here is a short memo on setting up ipython with an enhanced console.

install python

Download the current python installer. Python 3.3.1 seems to be the current one as of this writing.

PATH environment variable

Append “C:\Python33” to the end of the PATH environment variable, so that one can call python.

easy_install & pip

Obtain the install script of easy_install and launch it for installation.

$ python ez_setup.py

Append the script folder, “C:\Python33\Scripts”, to the PATH environment variable as well. Now you can call easy_install.

Then install pip by easy_install.

$ easy_install pip

pyreadline

ipython requires pyreadline package, so install pyreadline.

$ pip install pyreadline

ipython

Now it’s ready for ipython installation.

$ pip install ipython

console enhancement

Install a console enhancement. You can obtain the current binary from the project page.

Put it under, say, the C:\Console2 folder, and add this path to the PATH environment variable so that you can start it by calling “console”.

]]>
Sun, 22 Dec 2013 00:00:00 +0900
http://skyhigh71.github.io/2013/12/08/public_key_authentication.html http://skyhigh71.github.io/2013/12/08/public_key_authentication.html <![CDATA[public key authentication]]>

public key authentication

Sometimes you would like to log in to a remote system without password authentication. You can make use of public key authentication for this purpose.

Here is a short memo to set up local & remote system.

Let us generate an RSA key pair, in case you have none on localhost.

$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (~/.ssh/id_rsa): <ENTER>
Enter passphrase (empty for no passphrase): <ENTER>
Enter same passphrase again: <ENTER>
Your identification has been saved in ~/.ssh/id_rsa.
Your public key has been saved in ~/.ssh/id_rsa.pub.
The key fingerprint is:
ab:6b:44:71:2f:48:22:09:85:d2:1b:18:a5:93:03:97 USER@HOST

As seen, a private key (id_rsa) and a public key (id_rsa.pub) are generated.

$ file .ssh/*
.ssh/id_rsa:      PEM RSA private key
.ssh/id_rsa.pub:  OpenSSH RSA public key
.ssh/known_hosts: ASCII text

Now create .ssh directory on target host.

$ ssh USER@TARGET_HOST mkdir -p .ssh

Copy the public key of localhost onto the remote target host and save it as authorized_keys, so that the remote host will recognize localhost as an authorized one.

$ cat .ssh/id_rsa.pub |ssh USER@TARGET_HOST 'cat >> .ssh/authorized_keys'
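The append-once behavior of this step (what ssh-copy-id automates) can be sketched locally in Python; the function name and file paths here are illustrative, not part of any tool:

```python
import os

def append_key(pubkey_path, auth_path):
    """Append the public key to authorized_keys unless it is already present."""
    with open(pubkey_path) as f:
        key = f.read().strip()
    existing = ""
    if os.path.exists(auth_path):
        with open(auth_path) as f:
            existing = f.read()
    # only append when the key is not already authorized
    if key and key not in existing:
        with open(auth_path, "a") as f:
            f.write(key + "\n")
```

Running it twice leaves only one copy of the key, unlike a blind `cat >>`.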

Now you can connect to the target host without a password. Here is an excerpt from the debug messages yielded by the ssh command.

debug1: Authentications that can continue: publickey,password
debug1: Next authentication method: publickey
debug1: Offering RSA public key: ~/.ssh/id_rsa
debug1: Server accepts key: pkalg ssh-rsa blen 279
debug1: read PEM private key done: type RSA
debug1: Authentication succeeded (publickey).
Authenticated to TARGET_HOST ([TARGET_HOST]:22).
]]>
Sun, 08 Dec 2013 00:00:00 +0900
http://skyhigh71.github.io/2013/12/05/automatic_upgrade.html http://skyhigh71.github.io/2013/12/05/automatic_upgrade.html <![CDATA[automatic upgrade]]>

automatic upgrade

Sometimes you would like to configure your ubuntu host to upgrade automatically without any manual intervention.

You can achieve this with the unattended-upgrades package. Here is a short memo on how to do so.

Install unattended-upgrades package (if not there).

$ sudo apt-get install unattended-upgrades

Enable automatic upgrade.

$ sudo dpkg-reconfigure unattended-upgrades

You will be prompted with a message like this.

Applying updates on a frequent basis is an important part of keeping systems secure. By default, updates need to be applied manually using package management tools. Alternatively, you can choose to have this system automatically download and install security updates.
Automatically download and install stable updates?

After enabling unattended-upgrades, you will have configuration like this.

$ more /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";

According to history records yielded in /var/log/apt/history.log, it seems that automatic update may take place around 08:00.

Start-Date: 2013-12-05  07:58:10
Install: linux-headers-3.11.0-14-generic:amd64 (3.11.0-14.21, automatic), linux-headers-3.11.0-14:amd64 (3.11.0-14.21, automatic), linux-image-extra-3.11.0-14-generic:amd64 (3.11.0-14.21, automatic), linux-image-3.11.0-14-generic:amd64 (3.11.0-14.21, automatic)
End-Date: 2013-12-05  08:00:54
]]>
Thu, 05 Dec 2013 00:00:00 +0900
http://skyhigh71.github.io/2013/12/01/vmplayer_on_ubuntu_13_10.html http://skyhigh71.github.io/2013/12/01/vmplayer_on_ubuntu_13_10.html <![CDATA[vmplayer on ubuntu 13.10]]>

vmplayer on ubuntu 13.10

I have recently upgraded an ubuntu machine from 13.04 to 13.10. (As anticipated,) vmplayer would not start up, stating that its kernel modules need to be rebuilt.

Ok, let us rebuild as we have been doing upon each kernel upgrade.

$ uname -r
3.11.0-13-generic
$ sudo ln -s /usr/src/linux-headers-`uname -r`/include/generated/uapi/linux/version.h /usr/src/linux-headers-`uname -r`/include/linux/version.h
$ sudo vmware-modconfig --console --install-all

Mmmmm, something strange. The build would not succeed and aborted in the middle.

Googling this symptom, I found a similar report.

Ok, I need to apply some patches before building the vmplayer modules. Here is what I followed.

Download following files at first.

  • procfs.patch
  • vmblock.3.10.patch
  • vmblock.3.11.patch

And then,

$ cd /usr/lib/vmware/modules/source
$ sudo tar -xf vmnet.tar
$ sudo tar -xf vmblock.tar
$ cd vmnet-only
$ sudo patch -p1 < ~/Downloads/procfs.patch
$ cd ../vmblock-only
$ sudo patch -p1 < ~/Downloads/vmblock.3.10.patch
$ sudo patch -p1 < ~/Downloads/vmblock.3.11.patch
$ cd ..
$ sudo tar -cf vmblock.tar vmblock-only
$ sudo tar -cf vmnet.tar vmnet-only
$ sudo vmware-modconfig --console --install-all

Ok, now my vmplayer comes up without any problem. And the original issue has been reported in the VMware communities.

]]>
Sun, 01 Dec 2013 00:00:00 +0900
http://skyhigh71.github.io/2013/11/24/cifs_continued.html http://skyhigh71.github.io/2013/11/24/cifs_continued.html <![CDATA[cifs continued]]>

cifs continued

In addition to previous post as to CIFS.

You may encounter “permission denied” error upon adding or modifying files.

You can avoid this error by adding following options in /etc/fstab.

uid=<user>,gid=<group>

Replace each value with your own user and default group respectively.

//<MS_HOST>/<ShareName>  /windows/server1  cifs credentials=<HOME>/cifs/passwd,uid=<user>,gid=<group>,iocharset=utf8,sec=ntlm  0  0
]]>
Sun, 24 Nov 2013 00:00:00 +0900
http://skyhigh71.github.io/2013/11/24/root_your_android.html http://skyhigh71.github.io/2013/11/24/root_your_android.html <![CDATA[root your android]]>

root your android

Sometimes you would like to obtain root privilege on an android device, for example in case you install applications which require root privilege (e.g. bash).

I hereby share with you a quick procedure to do so.

Please note that unlocking the boot loader will void the warranty of your device. I remind you that this procedure is done entirely at your own risk :-)

backup

You have to back up the precious data on your device before you start. Take your time so that you will not lose your treasure.

unlock boot loader

Shut down your device and boot into boot loader mode by pressing the “-” button upon start-up. Then connect the android device to your ubuntu host and execute the following command.

$ sudo fastboot oem unlock

This command will unlock the boot loader, and the lock state will change from “locked” to “unlocked”.

install root application

Download the CF-Auto-Root package which matches your device and install it.

For example,

$ fastboot boot image/CF-Auto-Root-<device_name>-nakasi-nexus<*>.img

install application

Now your android device is capable of allowing applications to have root privilege.

For example, I have installed the following applications (some do not require root) to work with android.

  • terminal emulator
  • BusyBox
  • hacker keyboard
  • bash
  • vim_touch

Fine. Now I can play with my android any time I want!

]]>
Sun, 24 Nov 2013 00:00:00 +0900
http://skyhigh71.github.io/2013/11/17/mount_windows_file_system_remotely.html http://skyhigh71.github.io/2013/11/17/mount_windows_file_system_remotely.html <![CDATA[mount windows file system remotely]]>

mount windows file system remotely

Sometimes you would like to mount a Windows file system from ubuntu so that you are able to work with Windows files using the command line tools of linux.

You can access via CIFS and here is a quick procedure to do so.

Create share

Create a file share on the Windows side, which is the target of reference. You can check the share properties by:

$ net share <shareName>

CIFS utility

Then install tools necessary for mounting with CIFS on ubuntu side.

$ sudo apt-get install cifs-utils

mount

Create mount point.

$ sudo mkdir -p /windows/server1

Prepare password file for authentication, say, named “passwd” under ~/cifs directory with content as follows:

username=<MS_UID>
password=<MS_PASSWD>

Add the following line to /etc/fstab so that one can mount the shared folder from ubuntu.

//<MS_HOST>/<ShareName>  /windows/server1  cifs credentials=<HOME>/cifs/passwd,iocharset=utf8,sec=ntlm  0  0

Now it’s ready for mounting.

$ sudo mount -v -a
]]>
Sun, 17 Nov 2013 00:00:00 +0900
http://skyhigh71.github.io/2013/11/12/view_next_line_of_matched_one.html http://skyhigh71.github.io/2013/11/12/view_next_line_of_matched_one.html <![CDATA[view next line of matched word]]>

view next line of matched word

Sometimes you would like to show the line(s) immediately following those which contain a specific word.

I have to confess that I did not know there is a useful option in grep to achieve this... I have been using, say, some script to extract the lines I would like to check.

You can achieve this with the -A option. For example, if you would like to show the next line after each line containing a specific word, the following command gives you the answer:

$ grep <WORD> -A 1 <FILE>

If you would like to see more lines, increase the argument of the -A option.
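The effect of -A can be mimicked in a few lines of Python — a simplified sketch only (real grep also prints group separators and never prints a line twice):

```python
def grep_after(lines, word, after=1):
    """Return each line containing `word` followed by the next `after` lines,
    roughly like `grep <word> -A <after>`."""
    out = []
    for i, line in enumerate(lines):
        if word in line:
            # slice is safe even when fewer than `after` lines remain
            out.extend(lines[i:i + after + 1])
    return out
```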

So easy...

]]>
Tue, 12 Nov 2013 00:00:00 +0900
http://skyhigh71.github.io/2013/11/11/x_window_trouble.html http://skyhigh71.github.io/2013/11/11/x_window_trouble.html <![CDATA[a X window trouble]]>

an X window trouble

When I connected from localhost to an instance of ubuntu in lxc, I encountered the following message upon login via ssh.

X11 forwarding request failed on channel 0

That is, I connected to this host with the -X option for X forwarding (so as to use gvim).

$ ssh -X -l ubuntu <lxc_instance_IP>

Let us dig further by adding the verbose option, -v, as an argument of ssh.

$ ssh -X -v -l ubuntu <lxc_instance_IP>

Then I found the following debug message, which states that the xauth program does not exist.

debug1: Remote: No xauth program; cannot forward with spoofing.

Ok, let us install xauth on target host side.

$ sudo apt-get install xauth

After installation of xauth, this message never comes up. And I am able to forward X sessions to host without any problem.

]]>
Mon, 11 Nov 2013 00:00:00 +0900
http://skyhigh71.github.io/2013/10/28/insert_characters_in_multiple_lines.html http://skyhigh71.github.io/2013/10/28/insert_characters_in_multiple_lines.html <![CDATA[insert characters in multiple lines]]>

insert characters in multiple lines

Sometimes you would like to insert the same characters into multiple lines. I have to confess that I’ve been using the sed command to achieve this, say,

$ sed -e "s/^/> /g" in.txt > out.txt

You can do the same within vim.

  • press ctrl + v
  • press shift + i (I)
  • input character(s) of your choice
  • press esc

So easy...

]]>
Mon, 28 Oct 2013 00:00:00 +0900
http://skyhigh71.github.io/2013/10/27/start_mongo.html http://skyhigh71.github.io/2013/10/27/start_mongo.html <![CDATA[start mongo]]>

start mongo

It occurred to me that I want some place to store data like CSV file. Ok, let us use NoSQL like mongodb.

Like other tools, set up process is quite easy for anyone.

Installation requires only one line of command execution. Depending on existing packages, it may take time to download all the necessary packages.

$ sudo apt-get install mongodb

Installation automatically starts up the daemon, mongod.

$ ps -ef|grep mongod
mongodb    759     1  0 03:47 ?        00:00:27 /usr/bin/mongod --config /etc/mongodb.conf

The configuration file is /etc/mongodb.conf. According to it, the database repository resides under the /var/lib/mongodb directory.

# Where to store the data.
dbpath=/var/lib/mongodb

mongod listens on the default port, TCP 27017.

$ sudo lsof -nPi:27017
COMMAND PID    USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
mongod  759 mongodb    6u  IPv4 113195      0t0  TCP 127.0.0.1:27017 (LISTEN)

OK, let us import a CSV file into mongodb. You can import a CSV file into a collection (roughly the equivalent of a sheet in a spreadsheet?) under a database.

$ mongoimport -d <DB_NAME> -c <COLLECTION_NAME> --type csv --file <CSV_FILE> --headerline
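With --headerline, each CSV row becomes a document keyed by the header fields. A rough Python sketch of that mapping (the sample data is made up, and note that csv keeps everything as strings, while mongoimport infers numeric types):

```python
import csv
import io

# a tiny stand-in for <CSV_FILE>; column names are illustrative
sample = io.StringIO("month,payment,bonus\n201201,1477,994\n201202,1462,988\n")

# each row becomes a dict keyed by the header line, like a mongodb document
docs = list(csv.DictReader(sample))
```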

After the import, you can check its content from the interactive shell.

$ mongo
MongoDB shell version: 2.2.4
connecting to: test
> use <DB_NAME>
switched to db <DB_NAME>
> db.<COLLECTION_NAME>.find().pretty()
]]>
Sun, 27 Oct 2013 00:00:00 +0900
http://skyhigh71.github.io/2013/10/19/port_forwarding.html http://skyhigh71.github.io/2013/10/19/port_forwarding.html <![CDATA[port forwarding]]>

port forwarding

Now you have a working wiki in the lxc network. Let's configure the instances to launch automatically upon system boot and to be accessible from the outside network.

lxc instances are started at boot by creating symbolic links under the /etc/lxc/auto directory, which point to the config file of each instance.

$ sudo ln -s /var/lib/lxc/wiki/config /etc/lxc/auto/wiki.conf
$ sudo ln -s /var/lib/lxc/proxy/config /etc/lxc/auto/proxy.conf

You will see the AUTOSTART flag set to YES.

$ sudo lxc-ls --fancy
NAME   STATE    IPV4       IPV6  AUTOSTART
------------------------------------------
proxy  RUNNING  10.0.3.20  -     YES
wiki   RUNNING  10.0.3.21  -     YES

By default, no one can access from outside.

$ sudo iptables -t nat -L
Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination

Configure the routing tables so that access to TCP port 443 is forwarded to the reverse proxy instance.

$ sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j DNAT --to 10.0.3.20:443
$ sudo iptables -t nat -L
Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination
DNAT       tcp  --  anywhere             anywhere             tcp dpt:https to:10.0.3.20:443

Now you can access the wiki by the IP address of the host.

This change of the routing table is not persistent and will be lost upon reboot. So, for example, add lines such as the following to the /etc/rc.local file.

NETWORK_IF=eth0
WIKI_IP=10.0.3.20
WIKI_PORT=443

iptables -t nat -A PREROUTING -i $NETWORK_IF -p tcp --dport $WIKI_PORT -j DNAT --to $WIKI_IP:$WIKI_PORT
]]>
Sat, 19 Oct 2013 00:00:00 +0900
http://skyhigh71.github.io/2013/10/19/caching_content_on_nginx.html http://skyhigh71.github.io/2013/10/19/caching_content_on_nginx.html <![CDATA[caching content on nginx]]>

caching content on nginx

You can cache content on the front-end reverse proxy so as to serve requests more efficiently and reduce traffic against the back-end web server.

Let us get started.

nginx.conf

The key point here is that you should insert the cache configuration line before the includes, so that it is evaluated beforehand.

http {

    ##
    # cache config
    ##
    proxy_cache_path /tmp/nginx/cache levels=1 keys_zone=wiki:4m inactive=1h max_size=10m;

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}

With this configuration,

  • actual cache files are stored under the /tmp/nginx/cache directory with a hierarchy of one level

/tmp/nginx/cache
├── 0
│   └── 64b08f8ec9459d892a1a80bea5d2d400
├── 1
│   └── 4bbed59225358625d11842e1ec069b81
├── 2
│   └── 47c32bbfbc8c08c9047c8a8271893f02

  • the cache zone is registered under the key “wiki” and can grow to 10MB in size at most
  • cached entries are retired after 1 hour of inactivity
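As far as I understand, nginx names each cache file after the MD5 of the cache key, and levels=1 takes the digest's last hex character as the subdirectory — which matches the tree above. A sketch of that layout (the key string is illustrative):

```python
import hashlib

def cache_path(key, root="/tmp/nginx/cache"):
    """Sketch of the nginx cache layout with levels=1: the file is named by
    the MD5 of the cache key, under a directory named after the digest's
    last hex character."""
    digest = hashlib.md5(key.encode()).hexdigest()
    return "%s/%s/%s" % (root, digest[-1], digest)
```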

Please make certain that the cache directory does exist.

virtual host

A simple setup to cache responses of status 200 for 10 minutes.

location / {

    proxy_cache       wiki;
    proxy_cache_valid 200 10m;
}
]]>
Sat, 19 Oct 2013 00:00:00 +0900
http://skyhigh71.github.io/2013/10/19/password_authentication_on_nginx.html http://skyhigh71.github.io/2013/10/19/password_authentication_on_nginx.html <![CDATA[password authentication on nginx]]>

password authentication on nginx

I hereby demonstrate a simple procedure to set up password authentication for content on nginx, which is quite similar to that of apache.

First, you need to install apache2-utils for the purpose of creating password file.

$ sudo apt-get install apache2-utils

Create password file with htpasswd command.

$ sudo htpasswd -c /home/ubuntu/wiki/.htpasswd lupin

Incorporate the newly created password file into the virtual host configuration.

location / {
    auth_basic "Open Sesame!";
    auth_basic_user_file /home/ubuntu/wiki/.htpasswd;
}

Reload nginx to reflect the modified configuration.

$ sudo /etc/init.d/nginx reload
]]>
Sat, 19 Oct 2013 00:00:00 +0900
http://skyhigh71.github.io/2013/10/17/enable_ssl_on_nginx.html http://skyhigh71.github.io/2013/10/17/enable_ssl_on_nginx.html <![CDATA[enable encryption on nginx]]>

enable encryption on nginx

Sometimes you would like to configure an nginx instance to communicate with clients over a secured connection. You can issue a self-signed certificate and make use of it for encryption.

Here is a quick procedure to do it.

issue certificate

Create a directory to store key and certificate.

$ cd /etc/nginx
$ sudo mkdir cert
$ cd cert

Install the openssl package if not there, and create a private key named “server.key”.

$ sudo apt-get install openssl
$ sudo openssl genrsa -des3 -out server.key 1024
Generating RSA private key, 1024 bit long modulus
......................................++++++
............................++++++
e is 65537 (0x10001)
Enter pass phrase for server.key:
Verifying - Enter pass phrase for server.key:
$ file server.key
server.key: PEM RSA private key

Then create a CSR named “server.csr”. Answer the inquiries given by the command as appropriate.

$ sudo openssl req -new -key server.key -out server.csr
$ file server.csr
server.csr: PEM certificate request

Strip the passphrase from the private key (keeping a copy as server.key.org), so that nginx can start without prompting for it.

$ sudo cp server.key server.key.org
$ sudo openssl rsa -in server.key.org -out server.key

Finally, issue a self-signed certificate named “server.cert”.

$ sudo openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.cert
$ file server.cert
server.cert: PEM certificate

Now you have your own server certificate for your encryption.

enable SSL

Incorporate the server certificate into nginx’s configuration by pointing to its location.

server {

    listen 443 ssl;

    ssl on;
    ssl_certificate     cert/server.cert;
    ssl_certificate_key cert/server.key;

}

Now you see that nginx listens on HTTPS port.

$ sudo lsof -nPi:443
COMMAND PID     USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
nginx   385     root    8u  IPv4 127726      0t0  TCP *:443 (LISTEN)
nginx   387 www-data    8u  IPv4 127726      0t0  TCP *:443 (LISTEN)
]]>
Thu, 17 Oct 2013 00:00:00 +0900
http://skyhigh71.github.io/2013/10/14/reverse_proxy.html http://skyhigh71.github.io/2013/10/14/reverse_proxy.html <![CDATA[reverse proxy]]>

reverse proxy

Create a plain reverse proxy instance with nginx. It’s quite simple.

Create an instance of an ubuntu container as usual. Install nginx and add a virtual host entry, say, “reverse”, under /etc/nginx/sites-available.

server {
    listen 80;

    access_log  /home/ubuntu/reverse/logs/access.log;
    error_log   /home/ubuntu/reverse/logs/error.log debug;

    location / {
        proxy_set_header X-Real-IP  $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $host;
        proxy_pass http://10.0.3.21:80/;
    }
}

Create a symbolic link under the /etc/nginx/sites-enabled directory pointing to it, and remove the default symbolic link.

Make certain that you create the log directory for the nginx instance.

$ cd
$ mkdir -p reverse/logs

That’s all

]]>
Mon, 14 Oct 2013 00:00:00 +0900
http://skyhigh71.github.io/2013/10/14/quick_wiki_configuration.html http://skyhigh71.github.io/2013/10/14/quick_wiki_configuration.html <![CDATA[quick wiki configuration]]>

quick wiki configuration

Now you have a container to store your content. So let us set up sphinx so that we can deploy our content onto it.

installation

First, install the make command & pip.

$ sudo apt-get install build-essential
$ sudo apt-get install python-setuptools
$ sudo easy_install pip

And then sphinx, extensions & fonts.

$ sudo apt-get install python-sphinx
$ sudo pip install sphinxcontrib-blockdiag sphinxcontrib-nwdiag sphinxcontrib-seqdiag sphinxcontrib-actdiag
$ sudo apt-get install python-matplotlib
$ sudo apt-get install ttf-dejavu

configuration

Configure the project configuration file to enable the extensions.

#extensions = []
extensions = ['sphinxcontrib.blockdiag',
              'sphinxcontrib.nwdiag',
              'sphinxcontrib.seqdiag',
              'sphinxcontrib.actdiag']
blockdiag_fontpath = '/usr/share/fonts/truetype/ttf-dejavu/DejaVuSansMono.ttf'
nwdiag_fontpath    = '/usr/share/fonts/truetype/ttf-dejavu/DejaVuSansMono.ttf'
seqdiag_fontpath   = '/usr/share/fonts/truetype/ttf-dejavu/DejaVuSansMono.ttf'
actdiag_fontpath   = '/usr/share/fonts/truetype/ttf-dejavu/DejaVuSansMono.ttf'

automatic build

Ok, now let us configure system to build html automatically upon any modification under source directory.

First install inotify-tools to monitor file system changes.

$ sudo apt-get install inotify-tools

Create a script to monitor changes and launch sphinx-build upon detection. Save this script as build.sh under the $HOME/script directory.

#!/usr/bin/env bash

ROOT=/home/ubuntu/wiki
WATCH=source

cd $ROOT

while inotifywait -r -e create,modify,delete $WATCH; do
    sphinx-build -b html -d build/doctrees source build/html
done
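Where inotify-tools is unavailable, the same create/modify/delete detection can be approximated in Python by polling modification times — a portable but less efficient sketch; the function names are my own:

```python
import os
import time

def snapshot(root):
    """Map every file under root to its modification time."""
    state = {}
    for base, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(base, name)
            state[path] = os.path.getmtime(path)
    return state

def watch(root, on_change, interval=2.0, cycles=None):
    """Poll for created/modified/deleted files; call on_change on any difference."""
    before = snapshot(root)
    n = 0
    while cycles is None or n < cycles:
        time.sleep(interval)
        after = snapshot(root)
        if after != before:
            on_change()   # e.g. run sphinx-build here
            before = after
        n += 1
```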

So as to launch this script automatically, create a wrapper script such as the following under the /etc/profile.d directory (scripts there are executed upon login).

#!/usr/bin/env bash

nohup /home/ubuntu/script/build.sh 1> /dev/null 2>&1 &
]]>
Mon, 14 Oct 2013 00:00:00 +0900
http://skyhigh71.github.io/2013/10/12/web_server_in_lxc.html http://skyhigh71.github.io/2013/10/12/web_server_in_lxc.html <![CDATA[static web server in lxc]]>

static web server in lxc

Let us create a web site in an lxc environment, with a rough design as follows.

design

  • network (diagram omitted)
  • traffic flow (diagram omitted)

web server

Create an instance of ubuntu under the name of “web”.

$ sudo lxc-create -n web -t ubuntu
$ sudo lxc-start -n web -d
$ ssh -l ubuntu `cut -d " " -f3 /var/lib/misc/dnsmasq.lxcbr0.leases`

Let us configure the timezone to JST and the network interface to a static one.

$ sudo ln -sf /usr/share/zoneinfo/Asia/Tokyo /etc/localtime
$ tail -8 /etc/network/interfaces
auto eth0
#iface eth0 inet dhcp
iface eth0 inet static
address 10.0.3.21
network 10.0.3.0
netmask 255.255.255.0
broadcast 10.0.3.255
gateway 10.0.3.1

nginx

Now install and configure nginx to serve the static content.

$ sudo apt-get install nginx

Configuration files related to nginx are deployed under /etc/nginx directory.

$ tree
.
├── conf.d
├── fastcgi_params
├── koi-utf
├── koi-win
├── mime.types
├── naxsi_core.rules
├── naxsi.rules
├── nginx.conf
├── proxy_params
├── scgi_params
├── sites-available
│   └── default
├── sites-enabled
│   └── default -> /etc/nginx/sites-available/default
├── uwsgi_params
└── win-utf

Decrease the number of worker processes.

$ diff nginx.conf nginx.conf.org
2c2
< worker_processes 1;
---
> worker_processes 4;

(It may not be necessary though,) configure nginx to start at boot time.

$ sudo update-rc.d nginx defaults

virtual host

Create a virtual host configuration file, say, “wiki” under /etc/nginx/sites-available directory.

server {
    listen 80;

    access_log  /home/ubuntu/wiki/logs/access.log;
    error_log   /home/ubuntu/wiki/logs/error.log debug;

    location / {
        root   /home/ubuntu/wiki/build/html/;
        index  index.html;
    }
}

Make a symbolic link under the /etc/nginx/sites-enabled directory, and remove the default configuration if it is not necessary.

$ sudo ln -s /etc/nginx/sites-available/wiki .
$ sudo rm -i default
]]>
Sat, 12 Oct 2013 00:00:00 +0900
http://skyhigh71.github.io/2013/10/09/search_for_package.html http://skyhigh71.github.io/2013/10/09/search_for_package.html <![CDATA[search for package]]>

search for package

You may encounter a situation where some shared libraries are stripped when you debug a core file. For example,

(gdb) info shared
From        To          Syms Read   Shared Object Library
0xb778f430  0xb77a6054  Yes (*)     /lib/i386-linux-gnu/libncurses.so.5
0xb75ef350  0xb7728f4e  Yes         /lib/i386-linux-gnu/libc.so.6
0xb75d2ad0  0xb75d3a9c  Yes         /lib/i386-linux-gnu/libdl.so.2
0xb75b87d0  0xb75c2c24  Yes (*)     /lib/i386-linux-gnu/libtinfo.so.5
0xb77c9830  0xb77e1e7c  Yes         /lib/ld-linux.so.2
(*): Shared library is missing debugging information.

You would like to find the package in which the relevant shared object is included. With a library that contains the symbol table, you can examine the core file more closely.

install

You need to have apt-file command for searching packages. Let’s install apt-file, as it is not installed by default.

$ sudo apt-get install apt-file

update cache

To search for packages, you need a local cache. Let's download the cache from the repository and update it.

$ apt-file update

search & install

Now you are ready to search for packages. Take the sample quoted above and search for “libncurses.so.5”.

$ apt-file search libncurses.so.5
lib64ncurses5: /lib64/libncurses.so.5
lib64ncurses5: /lib64/libncurses.so.5.9
libncurses5: /lib/i386-linux-gnu/libncurses.so.5
libncurses5: /lib/i386-linux-gnu/libncurses.so.5.9
libncurses5-dbg: /usr/lib/debug/lib/i386-linux-gnu/libncurses.so.5.9
libncurses5-dbg: /usr/lib/debug/lib64/libncurses.so.5.9
libncurses5-dbg: /usr/lib/debug/libncurses.so.5
libncurses5-dbg: /usr/lib/debug/libncurses.so.5.9
libncurses5-dbg: /usr/lib/debug/usr/libx32/libncurses.so.5.9
libx32ncurses5: /usr/libx32/libncurses.so.5
libx32ncurses5: /usr/libx32/libncurses.so.5.9

libncurses5-dbg must be our target package. Ok, let’s install it.

$ sudo apt-get install libncurses5-dbg

Now you can see stacks with function names and arguments. And checking the status of the shared libraries, you can confirm that they are no longer missing debugging information.

(gdb) info shared
From        To          Syms Read   Shared Object Library
0xb778f430  0xb77a6054  Yes         /lib/i386-linux-gnu/libncurses.so.5
0xb75ef350  0xb7728f4e  Yes         /lib/i386-linux-gnu/libc.so.6
0xb75d2ad0  0xb75d3a9c  Yes         /lib/i386-linux-gnu/libdl.so.2
0xb75b87d0  0xb75c2c24  Yes         /lib/i386-linux-gnu/libtinfo.so.5
0xb77c9830  0xb77e1e7c  Yes         /lib/ld-linux.so.2
]]>
Wed, 09 Oct 2013 00:00:00 +0900
http://skyhigh71.github.io/2013/10/08/resize_disk_size_of_guest_on_kvm.html http://skyhigh71.github.io/2013/10/08/resize_disk_size_of_guest_on_kvm.html <![CDATA[resize disk size of guest on KVM]]>

resize disk size of guest on KVM

Sometimes you regret that you created a guest OS with too little disk space. sigh... And you would like to increase the disk space a little.

You can do it in a few steps, and I hereby quote a sample procedure. In this sample, the OS of the guest instance is Windows 7.

shutdown

Shut down the guest OS before starting the operation.

resize image file

The guest OS’s file system is represented by an image file (*.img), which reflects its size.

$ du -sh /var/lib/libvirt/images/win7.img
21G /var/lib/libvirt/images/win7.img

Let us manipulate the image file with the qemu-img command. In this sample, 20GB is added to the existing file.

$ sudo qemu-img resize /var/lib/libvirt/images/win7.img +20G
Image resized.

resize partition

Now you need to resize the existing partition to grow by as much as you added. For that, let us make use of the gnome partition editor (gparted).

Download the ISO file of gparted, connect it to the guest instance, and boot from it. With the graphical user interface of gparted you can increase the partition size easily.

boot

Now you can boot the guest OS and see that the disk size has increased.

]]>
Tue, 08 Oct 2013 00:00:00 +0900
http://skyhigh71.github.io/2013/10/06/plotting.html http://skyhigh71.github.io/2013/10/06/plotting.html <![CDATA[plotting]]>

plotting

Sometimes you would like to create a plot out of some text file. You can achieve it by making use of matplotlib.

I have to confess that I am not quite accustomed to matplotlib, but I hereby demonstrate a sample.

installation

Installation is quite simple and one line command suffices.

$ sudo apt-get install python-matplotlib

sample

Suppose such a CSV file as follows.

201201,1477,994
201202,1462,988
201203,1437,985

That is, the CSV file contains residential loan prepayment history. Each line consists of:

  • month of prepayment
  • monthly payment
  • bonus payment

plot

You can plot a graph, for example, as follows.

import csv
from datetime import datetime
from matplotlib import pyplot as plt
from matplotlib import dates

if __name__ == "__main__":

    raw_data = "sample.csv"
    fmt = ["prepay", "month", "bonus"]

    # prepayment date
    prepay = []
    # lists to store each values
    monthCal = []
    monthAct = []
    year     = []

    with open(raw_data, "rb") as f:
        reader = csv.reader(f)
        for row in reader:
            prepay.append(dates.date2num(datetime.strptime(row[fmt.index("prepay")], "%Y%m")))
            monthCal.append(int(row[fmt.index("month")]) + int(row[fmt.index("bonus")])/6) 
            monthAct.append(int(row[fmt.index("month")]))
            year.append(int(row[fmt.index("month")])*12 + int(row[fmt.index("bonus")])*2) 

    # let's start plotting
    fig = plt.figure()

    # monthly graph
    graph = fig.add_subplot(2, 1, 1)
    graph.plot(prepay, monthCal, "bo", prepay, monthAct, "ro")

    graph.xaxis.set_major_locator(dates.MonthLocator())
    graph.xaxis.set_major_formatter(dates.DateFormatter("%Y/%m"))
    
    plt.xticks(rotation="vertical")
    plt.ylabel("payment per month (USD)")
    plt.grid(True)

    # yearly graph
    graph = fig.add_subplot(2, 1, 2)
    graph.plot(prepay, year, "go")

    graph.xaxis.set_major_locator(dates.MonthLocator())
    graph.xaxis.set_major_formatter(dates.DateFormatter("%Y/%m"))
    
    plt.ylabel("payment per year (USD)")
    plt.xticks(rotation="vertical")
    plt.grid(True)

    plt.show()
    fig.savefig(raw_data.replace("csv", "png"))
]]>
Sun, 06 Oct 2013 00:00:00 +0900
http://skyhigh71.github.io/2013/10/05/back_up_files.html http://skyhigh71.github.io/2013/10/05/back_up_files.html <![CDATA[back up files]]>

back up files

Here is another approach for creating backup files onto dropbox on a regular basis via cron. It is quite a simple script, but here are some remarks.

hostname

There are multiple approaches for obtaining the hostname; here I use socket.gethostname(), which is a thin wrapper around the gethostname() system call.
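For reference, the two common standard-library ways usually return the same value on a Linux host:

```python
import platform
import socket

# socket.gethostname() wraps the gethostname() system call;
# platform.node() reads the node name from uname and usually matches it
print(socket.gethostname())
print(platform.node())
```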

compression

Here I use bzip2 as the compression algorithm, which tends to achieve a better compression ratio on files like plain text in comparison with algorithms like gzip. As a trade-off for the good compression ratio, the compression process takes longer.
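The trade-off can be measured directly with the standard library (zlib stands in for gzip's algorithm here; the sample text and sizes are purely illustrative):

```python
import bz2
import zlib

# repetitive plain text, roughly the kind of content being backed up
text = ("column1,column2,column3\nvalue,value,value\n" * 500).encode()

bz2_size = len(bz2.compress(text))
gz_size  = len(zlib.compress(text, 9))

# both shrink the text considerably; which wins depends on the data
print(len(text), bz2_size, gz_size)
```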

temporary file

I create the backup file under a temporary directory first, and move it under the Dropbox directory only when something has changed, because creating the backup file directly under the Dropbox directory is quite slow due to sync traffic.

checksum

Finally, I compare the temporary backup file with the existing file by the MD5 checksum of each. If they do not match, some file(s) must have changed; if they are identical, nothing under the target directory changed, so there is nothing to do.

import datetime
import hashlib
import os
import os.path
import shutil
import socket
import tarfile

if __name__ == "__main__":

    # base directory, which is one level upper than target directory
    base      = "<BASE_DIRECTORY_OF_YOUR_CHOICE>"
    # backup target directory under base directory
    target    = "<BACK_UP_TARGET_DIRECTORY>"
    # directories to exclude from backup file
    exclude   = ["<DIRECTORY_TO_EXCLUDE>"]

    # backup file name / HOSTNAME_DAY.tar.bz2
    backup    = socket.gethostname() + "_" +\
                datetime.date.today().strftime("%A") +\
                ".tar.bz2"
    # temporary directory & temporary file
    temp_dir  = "/tmp"
    temp_file = os.path.join(temp_dir, backup)
    # destination directory
    dest_dir  = os.path.join(base, "Dropbox/<TARGET_DIRECTORY>")
    dest_file = os.path.join(dest_dir, backup)

    # let's start backup

    # first create a backup file under temporary file
    tar = tarfile.open(temp_file, "w:bz2")

    os.chdir(base)
    for root, dirs, files in os.walk(target):
        for name in files:
            for d in root.split('/'):  
                if d in exclude:
                    break
            else:
                tar.add(os.path.join(root, name))
    
    tar.close()

    # backup file creation has finished
    # now determine if file replacement is required or not
    if os.path.exists(dest_file):
        # dictionary to store md5 checksum of each file
        md5 = {}
        for f in temp_file, dest_file:
            # open in binary mode so the hash covers the raw bytes
            with open(f, "rb") as file_to_check:
                data = file_to_check.read()
                md5[f] = hashlib.md5(data).hexdigest()
        # compare checksum value of each file
        # copy temporary file under destination directory
        if md5[temp_file] != md5[dest_file]:
            shutil.copy(temp_file, dest_dir)
    else:
        # somehow backup of same name does not exist, do copy
        shutil.copy(temp_file, dest_dir)
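One possible refinement: the script above reads each file into memory in one go before hashing. For large archives, a chunked variant avoids that. This is a sketch of mine, not the author's code; the helper name md5sum is made up:

```python
import hashlib

def md5sum(path, chunk_size=1 << 20):
    """Compute the MD5 hex digest of a file, reading it in 1 MiB chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```

The two digests in the script could then be obtained as md5sum(temp_file) and md5sum(dest_file) without holding either archive in memory.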

If you would like to take a backup, say, every hour, then add a line like the following to your crontab.

$ crontab -l
0 * * * * python <PATH_TO_SCRIPT>/backup.py
]]>
Sat, 05 Oct 2013 00:00:00 +0900
http://skyhigh71.github.io/2013/10/02/x86_or_x64.html http://skyhigh71.github.io/2013/10/02/x86_or_x64.html <![CDATA[x86 or x64?]]>

x86 or x64?

I have to confess that I did not know the CPU (AMD E2-1800) of my tiny PC (ThinkPad Edge E135) is x64 capable...

You can check whether your CPU is x64 capable by looking for the lm (long mode) flag.

$ grep ^flags /proc/cpuinfo
flags               : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc extd_apicid aperfmperf pni monitor ssse3 cx16 popcnt lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch ibs skinit wdt arat hw_pstate npt lbrv svm_lock nrip_save pausefilter

If lm appears in the flags line, then the CPU is capable of the x86_64 architecture. After reinstalling the OS, I now have an x64 OS at hand!

$ arch
x86_64
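By the way, the lm-flag check can also be scripted. This is a small sketch of mine (the function name is made up); it falls back to False when /proc/cpuinfo is unavailable:

```python
import os

def is_x64_capable(cpuinfo="/proc/cpuinfo"):
    """Return True if the "lm" (long mode) flag appears in /proc/cpuinfo."""
    if not os.path.exists(cpuinfo):
        return False
    with open(cpuinfo) as f:
        for line in f:
            if line.startswith("flags"):
                # flags line looks like "flags : fpu vme de ... lm ..."
                return "lm" in line.split(":", 1)[1].split()
    return False

print(is_x64_capable())
```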

So easy... Ok, let's create an lxc instance on this reborn platform.

The timezone of the instance seems to be EDT, which is not convenient for us. Let us change the timezone to JST (Asia/Tokyo). You can do it by configuring the /etc/localtime file to reference the timezone data for Japan.

$ file /usr/share/zoneinfo/Asia/Tokyo
/usr/share/zoneinfo/Asia/Tokyo: timezone data, version 2, 3 gmt time flags, 3 std time flags, no leap seconds, 9 transition times, 3 abbreviation chars

$ sudo ln -sf /usr/share/zoneinfo/Asia/Tokyo /etc/localtime
]]>
Wed, 02 Oct 2013 00:00:00 +0900
http://skyhigh71.github.io/2013/09/29/static_ip_address_assignment.html http://skyhigh71.github.io/2013/09/29/static_ip_address_assignment.html <![CDATA[static IP address assignment]]>

static IP address assignment

Let us configure the network-related properties of our DNS server host.

hostname

The hostname can be configured by editing the /etc/hostname file.

static IP address

By default, hosts are configured to obtain an IP address from the DHCP server. The configuration is stored in the /etc/network/interfaces file.

auto eth0
iface eth0 inet dhcp

Replace the above as follows, so that the IP address 10.0.3.10 is statically assigned to the interface eth0.

auto eth0
iface eth0 inet static
address 10.0.3.10
network 10.0.3.0
netmask 255.255.255.0
broadcast 10.0.3.255
gateway 10.0.3.1

resolver

As its header comment warns, the /etc/resolv.conf file will be overwritten upon system reboot.

# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
#     DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN

So you need to configure the IP addresses of the DNS servers and the search domain in the /etc/network/interfaces file as well.

dns-search example.org
dns-nameservers 10.0.3.10 10.0.3.1 8.8.8.8

The resolver will then generate lines like the following in /etc/resolv.conf.

nameserver 10.0.3.10
nameserver 10.0.3.1
nameserver 8.8.8.8
search example.org


]]>
Sun, 29 Sep 2013 00:00:00 +0900
http://skyhigh71.github.io/2013/09/29/change_ip_address_range.html http://skyhigh71.github.io/2013/09/29/change_ip_address_range.html <![CDATA[change IP address range]]>

change IP address range

As described in previous posts, we now have some servers in the lxc network.

I would like the servers to have static IP addresses, say in the range from 10.0.3.2 to 10.0.3.99.

Clients, on the other hand, shall have dynamically assigned addresses, which should not conflict with the servers' ones. So let's limit the range of IP addresses that the DHCP server, dnsmasq, assigns.

You can configure the IP address range by modifying the lxc file under the /etc/default directory.

$ diff lxc lxc.org
27,28c27,28
< LXC_DHCP_RANGE="10.0.3.100,10.0.3.199"
< LXC_DHCP_MAX="100"
---
> LXC_DHCP_RANGE="10.0.3.2,10.0.3.254"
> LXC_DHCP_MAX="253"

After a restart, you will see that dnsmasq sets the IP address range as follows:

dnsmasq -u lxc-dnsmasq --strict-order --bind-interfaces --pid-file=/var/run/lxc/dnsmasq.pid --conf-file= --listen-address 10.0.3.1 --dhcp-range 10.0.3.100,10.0.3.199 --dhcp-lease-max=100 --dhcp-no-override --except-interface=lo --interface=lxcbr0 --dhcp-leasefile=/var/lib/misc/dnsmasq.lxcbr0.leases --dhcp-authoritative
]]>
Sun, 29 Sep 2013 00:00:00 +0900
http://skyhigh71.github.io/2013/09/28/name_server.html http://skyhigh71.github.io/2013/09/28/name_server.html <![CDATA[Name Server]]>

Name Server

Now let’s move on to Name Server. Sometimes you would like to have your own name services to manipulate protocol like SMTP.

Here I demonstrate the procedure to set up BIND (version 9). In this example, we use the fictitious domain example.org.

We disregard redundancy and focus on the primary server. That is, we omit the secondary server :-).

In this example, we use Ubuntu as the platform. The goal is to set up an instance as follows.

[network diagram]

Installation

Installation is quite simple.

$ sudo apt-get install bind9 dnsutils

That’s all.

Server Configurations

Configuration files are stored under the /etc/bind directory. You will modify some files and add zone files for forward/reverse lookup.

├── bind.keys
├── db.0
├── db.127
├── db.255
├── db.empty
├── db.local
├── db.root
├── named.conf
├── named.conf.default-zones
├── named.conf.local
├── named.conf.options
├── rndc.key
└── zones.rfc1918

Forward Zone

Append the following lines to the named.conf.local file to specify your domain name and its zone file.

zone "example.org" {
        type master;
    file "/etc/bind/db.example.org";
};

Create the zone file, db.example.org, describing the hosts in the domain.

;
; BIND data file for example.org
;
$TTL    604800
@       IN      SOA     example.org. root.example.org. (
                       20130928         ; Serial
                         604800         ; Refresh
                          86400         ; Retry
                        2419200         ; Expire
                         604800 )       ; Negative Cache TTL
        IN      A       10.0.3.10
;
@       IN      NS      dns.example.org.
@       IN      A       10.0.3.10
@       IN      AAAA    ::1
dns     IN      A       10.0.3.10
;MTA
        IN      MX 10   mta.example.org.
mta     IN      A       10.0.3.20

Reverse Zone

And now configure reverse lookup. As with forward lookup, append the following lines to the named.conf.local file.

zone "3.0.10.in-addr.arpa" {
        type master;
        file "/etc/bind/db.10";
};

And create a file named db.10 as follows.

;
; BIND reverse data file for network 10.0.3.0
;
$TTL    604800
@       IN      SOA     example.org. root.example.org. (
                       20130928         ; Serial
                         604800         ; Refresh
                          86400         ; Retry
                        2419200         ; Expire
                         604800 )       ; Negative Cache TTL
;
@       IN      NS      dns.
10      IN      PTR     dns.example.org.
20      IN      PTR     mta.example.org.

Logging

Log messages have been and will be your friend when debugging problems. Append the following lines to the named.conf.local file.

logging {
    channel query.log {
        file "/var/log/query.log";
        severity debug 3;
    };
    category queries { query.log; };
};

And create the log file and change its owner to the bind user.

$ sudo touch /var/log/query.log
$ sudo chown bind /var/log/query.log

You will see log messages like this.

client 127.0.0.1#34060 (mta.example.org): query: mta.example.org IN A +E (127.0.0.1)

start service

$ sudo service bind9 restart
]]>
Sat, 28 Sep 2013 00:00:00 +0900
http://skyhigh71.github.io/2013/09/25/directory_server_in_lxc.html http://skyhigh71.github.io/2013/09/25/directory_server_in_lxc.html <![CDATA[Directory Server]]>

Directory Server

Sometimes you may (or may not) want to have a Directory Server at hand for, say, storing your addresses. Let us see whether we can run a Directory Server in a virtual instance.

In this scenario, we use an x86 (i686) machine as the host (not x64).

Download the x86 version of the Directory Server installer (V19710-01.zip) from OTN, and deploy it onto the target virtual machine.

$ scp V19710-01.zip root@10.0.3.xxx:~/

As the virtual machine has a very limited set of tools, you need to install basic commands like tar/unzip first. The following sample installs Directory Server under the /home/dsee7 directory.

$ sudo yum install unzip tar
$ unzip V19710-01.zip
$ tar xfv DSEE.7.0.Linux-X86-zip.tar.gz
$ cd DSEE_ZIP_Distribution/
$ unzip -d /home sun-dsee7.zip

Ok, installation finished. Now let’s move on to configuration.

Let’s create instance.

$ ./dsadm create /home/dsee7/instance
Choose the Directory Manager password:
Confirm the Directory Manager password:
Use 'dsadm start '/home/dsee7/instance'' to start the instance

Now you may encounter a problem where starting the instance fails.

$ ./dsadm start ../instance/
ERROR<4167> - Startup  - conn=-1 op=-1 msgId=-1 - System error  Load library /home/dsee7/lib/pwdstorage-plugin.so: error /home/dsee7/lib/../lib/private/libfreebl3.so: version 'NSSRAWHASH_3.12.3' not found (required by /lib/libcrypt.so.1)

libcrypt.so is dependent upon NSS.

$ ldd pwdstorage-plugin.so
./pwdstorage-plugin.so: /home/dsee7/lib/./../lib/private/libfreebl3.so: version 'NSSRAWHASH_3.12.3' not found (required by /lib/libcrypt.so.1)

And libfreebl3.so seems not to have it.

$ find / -name libfreebl3.so -ls
27270228  320 -rwxr-xr-x   1 root     root       325256 Aug  7 16:17 /lib/libfreebl3.so
27660408  364 -rwxr-xr-x   1 root     root       372385 Aug 27  2009 /home/dsee7/lib/private/libfreebl3.so
27791820    0 lrwxrwxrwx   1 root     root           23 Sep 22 20:03 /usr/lib/libfreebl3.so -> ../../lib/libfreebl3.so
$ objdump -x /lib/libfreebl3.so |grep NSSRAWHASH_3.12.3
3 0x00 0x04ceacd3 NSSRAWHASH_3.12.3
$ objdump -x  /home/dsee7/lib/private/libfreebl3.so |grep NSSRAWHASH_3.12.3
$ 

As a temporary workaround, replace libfreebl3.so with a symlink to the one the OS provides.

$ ls -l libfreebl3.so*
lrwxrwxrwx 1 root root     18 Sep 24 04:37 libfreebl3.so -> /lib/libfreebl3.so
-rwxr-xr-x 1 root root 372385 Aug 27  2009 libfreebl3.so.org

Now you can start daemon.

$ ./dsadm start ../instance/
Directory Server instance '/home/dsee7/instance' started: pid=523

Let’s create a suffix to store entires.

$ ./dsconf create-suffix  "dc=lupin, dc=org"
]]>
Wed, 25 Sep 2013 00:00:00 +0900
http://skyhigh71.github.io/2013/09/23/centos_on_ubuntu.html http://skyhigh71.github.io/2013/09/23/centos_on_ubuntu.html <![CDATA[CentOS on ubuntu]]>

CentOS on ubuntu

Ok, let us create a RedHat-clone instance, say CentOS, on Ubuntu. To manipulate RPM packages, you need to install the curl and yum packages.

$ sudo apt-get install curl yum

By default, there is no lxc template for CentOS, so you need to deploy an lxc template manually. For example,

$ sudo wget -O /usr/share/lxc/templates/lxc-centos https://gist.github.com/hagix9/3514296/raw/7f6bb4e291fad1dad59a49a5c02f78642bb99a45/lxc-centos
$ sudo chmod 755 /usr/share/lxc/templates/lxc-centos

You need some adjustment in case the architecture of your machine is i686.

$ arch
i686
$ diff /usr/share/lxc/templates/lxc-centos /usr/share/lxc/templates/lxc-centos.org
170,171c170
<         #RELEASE_URL="$MIRROR_URL/Packages/centos-release-$release-$releaseminor.el6.centos.10.$arch.rpm"
<         RELEASE_URL="$MIRROR_URL/Packages/centos-release-$release-$releaseminor.el6.centos.10.i686.rpm"
---
>         RELEASE_URL="$MIRROR_URL/Packages/centos-release-$release-$releaseminor.el6.centos.10.$arch.rpm"

Ok, now you are ready to start instance creation.

$ sudo lxc-create -t centos -n centos01

Please note that the first execution may take time, as it downloads a substantial amount of RPM packages.

After successful creation of the instance, you can launch it and log in via ssh. By the way, the default root password is password.

$ sudo lxc-start -n centos01 -d
$ ssh -l root `cut -d " " -f3 /var/lib/misc/dnsmasq.lxcbr0.leases`
]]>
Mon, 23 Sep 2013 00:00:00 +0900
http://skyhigh71.github.io/2013/09/22/lxc_cont.html http://skyhigh71.github.io/2013/09/22/lxc_cont.html <![CDATA[lxc continued]]>

lxc continued

Let us create a virtual machine of your choice. Suppose that you would like to create an instance of ubuntu. (Please do not ask me why I need to run ubuntu on ubuntu :-))

$ sudo lxc-create -t ubuntu -n ubuntu01

As this command downloads files, the first execution may take time. So please wait for a while, say, over a cup of coffee.

After successful execution, you will have an ubuntu instance named “ubuntu01”.

Ok, let us start the instance as a daemon.

$ sudo lxc-start -n ubuntu01 -d

According to the DHCP server (dnsmasq), 10.0.3.198 has been leased.

$ cat /var/lib/misc/dnsmasq.lxcbr0.leases
1379835957 00:16:3e:08:50:7f 10.0.3.198 ubuntu01 *

Let us log in via ssh. The default uid/password combination is ubuntu/ubuntu.

$ ssh -l ubuntu 10.0.3.198

Ok, the network plan would look like this.

[network diagram]

]]>
Sun, 22 Sep 2013 00:00:00 +0900
http://skyhigh71.github.io/2013/09/21/lxc_introduction.html http://skyhigh71.github.io/2013/09/21/lxc_introduction.html <![CDATA[lxc introduction]]>

lxc introduction

You may be tempted (or may not be) to create lightweight virtual machines on your host. lxc (LinuX Containers) must be one of your options.

Installing the lxc package will install all dependent packages.

$ sudo apt-get install lxc

Virtual machines are manipulated with the lxc-* commands.

$ lxc-<TAB><TAB>
lxc-aa-custom-profile  lxc-clone              lxc-execute            lxc-list               lxc-restart            lxc-unfreeze
lxc-attach             lxc-console            lxc-freeze             lxc-ls                 lxc-shutdown           lxc-unshare
lxc-cgroup             lxc-create             lxc-halt               lxc-monitor            lxc-start              lxc-version
lxc-checkconfig        lxc-destroy            lxc-info               lxc-netstat            lxc-start-ephemeral    lxc-wait
lxc-checkpoint         lxc-device             lxc-kill               lxc-ps                 lxc-stop

You will see lxc related configuration files under /etc/init directory.

$ ls /etc/init/lxc*
/etc/init/lxc-instance.conf  /etc/init/lxc-net.conf  /etc/init/lxc.conf

You will notice that new virtual interface is added and activated.

$ ifconfig lxcbr0
lxcbr0    Link encap:Ethernet  HWaddr a6:1a:10:32:67:87
          inet addr:10.0.3.1  Bcast:10.0.3.255  Mask:255.255.255.0
          inet6 addr: fe80::a41a:10ff:fe32:6787/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:84 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:13896 (13.8 KB)

And you will see that DHCP server is already running as well.

dnsmasq -u lxc-dnsmasq --strict-order --bind-interfaces --pid-file=/var/run/lxc/dnsmasq.pid --conf-file= --listen-address 10.0.3.1 --dhcp-range 10.0.3.2,10.0.3.254 --dhcp-lease-max=253 --dhcp-no-override --except-interface=lo --interface=lxcbr0 --dhcp-leasefile=/var/lib/misc/dnsmasq.lxcbr0.leases --dhcp-authoritative
]]>
Sat, 21 Sep 2013 00:00:00 +0900
http://skyhigh71.github.io/2013/09/21/library_details.html http://skyhigh71.github.io/2013/09/21/library_details.html <![CDATA[library details]]>

library details

Sometimes you may (or may not) feel like checking the details of a certain library on your machine. Suppose that the relevant library is an X-related one.

Let us search for the library name first. The apt-cache command is your friend.

$ apt-cache search libx11
libx11-6 - X11 client-side library
libx11-6-dbg - X11 client-side library (debug package)

Ok, libx11-6 is what we are looking for. Let us check its content.

$ apt-cache show libx11-6
Package: libx11-6
Priority: standard
Section: libs
Installed-Size: 1489
Maintainer: Ubuntu Developers <ubuntu-devel-discuss@lists.ubuntu.com>
Original-Maintainer: Debian X Strike Force <debian-x@lists.debian.org>
Architecture: i386
Source: libx11
Version: 2:1.5.0-1ubuntu1.1
Depends: libc6 (>= 2.15), libxcb1 (>= 1.2), libx11-data
Pre-Depends: multiarch-support
Filename: pool/main/libx/libx11/libx11-6_1.5.0-1ubuntu1.1_i386.deb
Size: 776896
MD5sum: 3fea2137a989c9cf840c7e0db9b91606
SHA1: 5fcc7ab89b79687fb338e169622a2c57912bcb60
SHA256: 170ce1631c61458216078415ad9d660df98e193c18c4774d45432cb366e64c75
Description-en: X11 client-side library
 This package provides a client interface to the X Window System, otherwise
 known as 'Xlib'.  It provides a complete API for the basic functions of the
 window system.
 .
 More information about X.Org can be found at:
 <URL:http://www.X.org>
 .
 This module can be found at
 git://anongit.freedesktop.org/git/xorg/lib/libX11
Multi-Arch: same
Description-md5: d75c895abf6eca234f7480813aaa95ec
Bugs: https://bugs.launchpad.net/ubuntu/+filebug
Origin: Ubuntu
Supported: 9m
Task: standard, kubuntu-active, kubuntu-active, mythbuntu-frontend, mythbuntu-frontend, mythbuntu-desktop, mythbuntu-backend-slave, mythbuntu-backend-slave, mythbuntu-backend-master, mythbuntu-backend-master

You can download source as described.

$ git clone git://anongit.freedesktop.org/git/xorg/lib/libX11
]]>
Sat, 21 Sep 2013 00:00:00 +0900
http://skyhigh71.github.io/2013/09/19/network_diagram_in_document.html http://skyhigh71.github.io/2013/09/19/network_diagram_in_document.html <![CDATA[network diagram in document]]>

network diagram in document

Sometimes you would like to draw network diagram and incorporate it into your document.

It’s so easy with sphinx and nwdiag. For example, prepare a text like the following and save it as, say, net.diag.

nwdiag {
    internet [shape = cloud];
    internet -- router;

    network eva{
        address = "192.168.10.0/24";
        router;
        Nerv   [address = "192.168.10.10"];
        Seele  [address = "192.168.10.11"];
        Gehirn [address = "192.168.10.12"];
    }

    network theLupin{
        address = "192.168.20.0/24";
        router;
        lupin   [address = "192.168.20.10"];
        fujiko  [address = "192.168.20.11"];
        jigen   [address = "192.168.20.12"];
    }
}

And merge this file into your sphinx document as follows.

.. nwdiag:: net.diag

Then you will see a beautiful network diagram like this.

[network diagram]

]]>
Thu, 19 Sep 2013 00:00:00 +0900
http://skyhigh71.github.io/2013/09/18/usb_devices.html http://skyhigh71.github.io/2013/09/18/usb_devices.html <![CDATA[USB devices]]>

USB devices

Sometimes you would like to list the USB devices attached to your machine. You can make use of the lsusb utility for this sake.

For example,

$ lsusb
Bus 002 Device 003: ID 5986:0299 Acer, Inc
Bus 004 Device 002: ID 0a5c:21f4 Broadcom Corp.
Bus 006 Device 008: ID 0bb4:0ffe HTC (High Tech Computer Corp.) Desire HD (modem mode)

In case you would like to check further details of a specific USB device, for example the Desire, you can use the verbose option, -v.

$ sudo lsusb -v -s 008

By the way, the lsusb command accesses files under the /dev/bus/usb directory, which requires root permission. Therefore, if you execute the command as a user other than root without sudo, you will see an open failure.

open("/dev/bus/usb/006/008", O_RDWR)    = -1 EACCES (Permission denied)
]]>
Wed, 18 Sep 2013 00:00:00 +0900
http://skyhigh71.github.io/2013/09/18/copying_files.html http://skyhigh71.github.io/2013/09/18/copying_files.html <![CDATA[copy intermediate directories]]>

copy intermediate directories

Sometimes you would like to cp files under directories that do not yet exist in the target directory. The cp command does support this kind of operation (via its --parents option).

Suppose that you have a file containing a list of files with their directories. You can copy the files listed in it, for example, as follows.

$ mkdir <TARGET_DIR>
$ while read LINE
> do
> cp -p --parents $LINE <TARGET_DIR>
> done < <INPUT_FILE>

You will see that the directory structures are automatically created under the target directory.

So easy...
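A self-contained way to try it out (the scratch paths below are made up for illustration):

```shell
# create a scratch source tree and an empty destination
tmp=$(mktemp -d)
mkdir -p "$tmp/src/a/b" "$tmp/dst"
echo hello > "$tmp/src/a/b/file.txt"

# --parents recreates the a/b hierarchy under the destination
(cd "$tmp/src" && cp -p --parents a/b/file.txt "$tmp/dst")
ls "$tmp/dst/a/b"   # prints file.txt
```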

Please refer to the GNU coreutils page for details.

]]>
Wed, 18 Sep 2013 00:00:00 +0900
http://skyhigh71.github.io/2013/09/15/password_lock_of_screensaver.html http://skyhigh71.github.io/2013/09/15/password_lock_of_screensaver.html <![CDATA[password lock of screensaver]]>

password lock of screensaver

By default, gnome-screensaver runs on ubuntu and password lock is enabled.

$ ps -ef|grep -i screen
lupin   2000     1  0 15:44 ?        00:00:00 /usr/bin/gnome-screensaver --no-daemon

Sometimes you may find the password lock troublesome and would like to disable it. You can do so with the gsettings command.

The current configuration can be checked with:

$ gsettings get org.gnome.desktop.screensaver lock-enabled
true

You can disable the password lock by setting the lock-enabled value to false:

$ gsettings set org.gnome.desktop.screensaver lock-enabled false

Tracing the system calls made by this command, the screensaver configuration seems to be stored in:

  • /run/user/<UID>/dconf/user (may be for running process)
  • /<HOME>/.config/dconf/user (for permanent configuration)

You can lock screen by “CTRL” + “ALT” + “L”.

]]>
Sun, 15 Sep 2013 00:00:00 +0900
http://skyhigh71.github.io/2013/09/13/source_code_reference.html http://skyhigh71.github.io/2013/09/13/source_code_reference.html <![CDATA[source code reference]]>

source code reference

Now we have a clear, precise stack. Here is a snippet of the stack for the relevant thread.

(gdb) where
#0  0xb7705424 in __kernel_vsyscall ()
#1  0xb7504b1f in __GI_raise (sig=sig@entry=6) at ../nptl/sysdeps/unix/sysv/linux/raise.c:56
#2  0xb75080b3 in __GI_abort () at abort.c:90
#3  0xb74fd877 in __assert_fail_base (fmt=0xb385fe90 <Address 0xb385fe90 out of bounds>, assertion=assertion@entry=0xb60f8419 "ret != inval_id",
    file=file@entry=0xb60f838a "../../src/xcb_io.c", line=line@entry=529, function=function@entry=0xb60f849e <__PRETTY_FUNCTION__.14075> "_XAllocID")
    at assert.c:92
#4  0xb74fd927 in __GI___assert_fail (assertion=assertion@entry=0xb60f8419 "ret != inval_id", file=file@entry=0xb60f838a "../../src/xcb_io.c",
    line=line@entry=529, function=function@entry=0xb60f849e <__PRETTY_FUNCTION__.14075> "_XAllocID") at assert.c:101
#5  0xb608149f in _XAllocID (dpy=0xaf43e80) at ../../src/xcb_io.c:529

As seen, an assertion failed and the program aborted itself. Before applying the debug information, which contains the symbol table, frame 5 looked like this.

#5  0xb608149f in _XAllocID () from /usr/lib/i386-linux-gnu/libX11.so.6

So let’s download source code of libX11 for reference.

$ apt-get source libx11

It’s so easy... The source files are downloaded under the current working directory.

$ ls
libx11-1.5.0  libx11_1.5.0-1ubuntu1.1.diff.gz  libx11_1.5.0-1ubuntu1.1.dsc  libx11_1.5.0.orig.tar.gz

You can configure gdb to reference the source code, for example, by “set substitute-path”. But in this case “../../src” did not work for me...
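Another option worth trying (untested here; the path below is illustrative) is gdb's directory command, which prepends entries to the source search path. Since gdb concatenates each search directory with the relative name recorded in the debug info (../../src/xcb_io.c), a directory two levels below the source tree should resolve:

```gdb
(gdb) directory /path/to/libx11-1.5.0/hoge/hoge
```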

So, a quite primitive way though:

$ cd libx11-1.5.0/
$ mkdir -p hoge/hoge
$ cd hoge/hoge/
$ ls ../../src/xcb_io.c
../../src/xcb_io.c
$ gdb <HOME>/.dropbox-dist/dropbox /var/cores/core.dropbox.2147.*

Now you can see relevant section of code and variables.

(gdb) frame 5
#5  0xb608149f in _XAllocID (dpy=0xaf43e80) at ../../src/xcb_io.c:529
529         dpy->xcb->next_xid = inval_id;
(gdb) list
524 /* _XAllocID - resource ID allocation routine. */
525 XID _XAllocID(Display *dpy)
526 {
527         XID ret = dpy->xcb->next_xid;
528         assert (ret != inval_id);
529         dpy->xcb->next_xid = inval_id;
530         _XSetPrivSyncFunction(dpy);
531         return ret;
532 }
533
(gdb) print inval_id
$1 = 4294967295
(gdb) print dpy->xcb->next_xid
$2 = 4294967295
]]>
Fri, 13 Sep 2013 00:00:00 +0900
http://skyhigh71.github.io/2013/09/12/symbol_table.html http://skyhigh71.github.io/2013/09/12/symbol_table.html <![CDATA[symbol table reference]]>

symbol table reference

Now the time has come to debug the core file. In this case, dropbox seems to have yielded a core file.

$ file /var/cores/core.dropbox.2172.1378900464
/var/cores/core.dropbox.2172.1378900464: ELF 32-bit LSB core file Intel 80386, version 1 (SYSV), SVR4-style, from '<HOME>/.dropbox-dist/dropbox'

But some dependent libraries (e.g. libc.so) are stripped and do not have a symbol table.

$ file /lib/i386-linux-gnu/libc-2.17.so
/lib/i386-linux-gnu/libc-2.17.so: ELF 32-bit LSB shared object, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), BuildID[sha1]=0x81a55e819c61f581e6a9179eaf59726dd80aea31, for GNU/Linux 2.6.24, stripped
$ objdump -t /lib/i386-linux-gnu/libc-2.17.so

/lib/i386-linux-gnu/libc-2.17.so:     file format elf32-i386

SYMBOL TABLE:
no symbols

On Linux we do not replace a stripped library with a non-stripped one. Instead we install a separate debug information file and configure the debugger to reference it.

$ apt-cache search libc-
libc6-dbg - Embedded GNU C Library: detached debugging symbols
$ sudo apt-get install libc6-dbg

Packages with debug information seem to have the -dbg suffix. Installation deploys files under the /usr/lib/debug directory.

$ file /usr/lib/debug/lib/i386-linux-gnu/libc-2.17.so
/usr/lib/debug/lib/i386-linux-gnu/libc-2.17.so: ELF 32-bit LSB shared object, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), BuildID[sha1]=0x81a55e819c61f581e6a9179eaf59726dd80aea31, for GNU/Linux 2.6.24, not stripped
$ objdump -t /usr/lib/debug/lib/i386-linux-gnu/libc-2.17.so

/usr/lib/debug/lib/i386-linux-gnu/libc-2.17.so:     file format elf32-i386

SYMBOL TABLE:
00000174 l    d  .note.gnu.build-id 00000000 .note.gnu.build-id
00000198 l    d  .note.ABI-tag      00000000 .note.ABI-tag

gdb is wise enough to automatically detect the debug information (as long as libc6 and libc6-dbg match) and apply it!

(gdb) set verbose on
(gdb) run
...
Reading symbols from /lib/i386-linux-gnu/libc.so.6...Reading symbols from /usr/lib/debug/lib/i386-linux-gnu/libc-2.17.so...done.
done.


Now we can see precise stack!

]]>
Thu, 12 Sep 2013 00:00:00 +0900
http://skyhigh71.github.io/2013/09/12/spell_check_in_vim.html http://skyhigh71.github.io/2013/09/12/spell_check_in_vim.html <![CDATA[spell check in vim]]>

spell check in vim

If my memory does not deceive me, I started using vi as my editor on HP-UX 10. (Yes, that is quite a long time ago.) I had not noticed it until quite recently, but vim (vi improved) has evolved much more than I expected.

I write almost all of my text in vim, including this blog post. As you can see, my vocabulary is quite limited and often incorrect, and my phrasing rarely goes beyond routine. That is, I need a spell-checking mechanism to correct the many typos I make, and suggestions to express what I mean.

spell check

vim provides a spell-checking mechanism, and you can enable it by:

:set spell

You will see color indicators telling you where there is a typo. You can list correction candidates by placing the cursor on the relevant word and typing

z=

You will see a list of words. Select the appropriate one out of them.

dictionary extension

You can add words that are not listed in the default dictionary to your own custom one. Type zg(ood) on the relevant word.

zg

This operation simply adds the word to the spellfile, e.g. the en.utf-8.add file under the ~/.vim/spell directory.

$ cat en.utf-8.add
vim

If you would like to revert it, then type zw(rong).

zw

It seems that vim appends “/!” for that word in spell file.

$ cat en.utf-8.add
vim/!
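To make these settings persistent, something like the following in your ~/.vimrc should work (a sketch; the spelllang value here is my own choice):

```vim
set spell
set spelllang=en_us
set spellfile=~/.vim/spell/en.utf-8.add
```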

For details of spell checking, please refer to ”:help spell”.

]]>
Thu, 12 Sep 2013 00:00:00 +0900
http://skyhigh71.github.io/2013/09/10/how_to_notify_core_file_creation.html http://skyhigh71.github.io/2013/09/10/how_to_notify_core_file_creation.html <![CDATA[how to notify core file creation]]>

how to notify core file creation

Now you are able to configure the system to write core files to a designated directory.

But there is no way to notice the creation of a core file (as apport has been disabled); core files will silently pile up behind the door. Repeated core dumps can easily consume disk space, so you have to be careful and pay attention to such a situation.

Let’s find an effective method to get notified of core file creation.

inotify

inotify?

inotify (inode notify) is a Linux kernel subsystem that notices changes on the file system (quoted from Wikipedia). You can enjoy this functionality via the inotify-tools package.

Installation is quite easy (as always).

$ apt-cache search inotify-tools
inotify-tools - command-line programs providing a simple interface to inotify
$ sudo apt-get install inotify-tools
$ inotifywa<TAB>
inotifywait   inotifywatch

You can make use of these tools to watch a target directory and take some action(s). For example,

#!/usr/bin/env bash

while inotifywait -e create,modify /tmp; do
    echo "Lupin, something happened under /tmp directory"
done

Touching a file “hi” under the /tmp directory will yield:

$ ./watch.sh
Setting up watches.
Watches established.
/tmp/ CREATE hi
Lupin, something happened under /tmp directory

In the above sample, inotifywait monitors following events:

  • CREATE
  • MODIFY

You can monitor other events like OPEN/CLOSE/DELETE/ATTRIB. And if you would like inotify* to monitor a directory recursively, then you can use the -r option for that sake.

Fine.

libnotify-bin

Now we have a method of detection. The next item is a means of notifying you.

There is a tool called libnotify-bin, which sends message to notification daemon.

$ apt-cache search libnotify-bin
libnotify-bin - sends desktop notifications to a notification daemon (Utilities)
$ apt-get install libnotify-bin

As far as I checked, libnotify-bin is already installed by default.

Sending a message will give you pop-up message on desktop.

$ for i in `seq 10 -1 0`
> do
> echo $i|festival --tts
> done && notify-send 'Ten Count! Knockout!'

You win!

automatic start upon login

It may not be the most appropriate place, but let’s make use of the /etc/profile.d directory for automatic start-up upon login.

Prepare a script, which launches inotifywait under the name of, say, watchCore.sh.

#!/usr/bin/env bash

while inotifywait -e create /var/cores; do
    notify-send -u critical "a new core file is created. Please check /var/cores directory"
done

Deploy another script under the /etc/profile.d directory which launches the former one.

#!/usr/bin/env bash

nohup <PATH>/watchCore.sh 1> /dev/null 2>&1 &

Please do not forget to run the script in the background. :-)

Voila!

]]>
Tue, 10 Sep 2013 00:00:00 +0900
http://skyhigh71.github.io/2013/09/09/package_management_on_ubuntu.html http://skyhigh71.github.io/2013/09/09/package_management_on_ubuntu.html <![CDATA[package management on ubuntu]]>

package management on ubuntu

Why did you choose ubuntu as your platform? Everyone must have reason(s) for that.

For me, one of the main reasons is easy access to software packages. (Remember the days of Solaris 8 when one had to install all the necessary packages manually by pkgadd...)

How are software package repositories maintained on ubuntu? Take google-chrome as an example and see how it goes.

apt & dpkg

Ubuntu makes use of apt (Advanced Packaging Tool) for package maintenance. apt resolves dependencies among packages and gives the user a convenient way to work with them.

apt itself is built upon dpkg, a tool that manipulates deb-format packages.

Setup & Installation

When you add a repository, you need to trust it by adding its public key. Retrieve the public key and add it to the keyring.

$ wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | sudo apt-key add -

The relevant public key will be stored into /etc/apt/trusted.gpg file.

$ sudo apt-key list
/etc/apt/trusted.gpg
--------------------

pub   1024D/7FAC5991 2007-03-08
uid                  Google, Inc. Linux Package Signing Key <linux-packages-keymaster@google.com>
sub   2048g/C07CB649 2007-03-08

Then add google repository in source list.

$ sudo sh -c 'echo "deb http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list'

Finally install google-chrome.

$ sudo apt-get update
$ sudo apt-get install google-chrome-stable

Summary

Software-package-related items are stored under the /etc/apt directory. When you add a new software package that is not listed in the default repositories, you need to add:

  • public key (trusted.gpg)
  • package location (<package>.list)
]]>
Mon, 09 Sep 2013 00:00:00 +0900
http://skyhigh71.github.io/2013/09/07/vmplayer_abnormal_exit_on_ubuntu.html http://skyhigh71.github.io/2013/09/07/vmplayer_abnormal_exit_on_ubuntu.html <![CDATA[vmplayer abnormal exit on ubuntu]]>

vmplayer abnormal exit on ubuntu

These days on my ubuntu box I encounter an abnormal exit of vmplayer upon start-up. Specifically speaking, vmplayer asks me to update kernel modules and the update fails.

sigh.

problem description

It goes as follows.

$ vmplayer
Logging to /tmp/vmware-ayamada/vmware-modconfig-12999.log
... "VMWare Kernel Module Updater" windows comes up ...
$ echo $?
1

The vmware-modconfig log ends up here:

Failed to find /lib/modules/3.8.0-30-generic/build/include/linux/version.h
/lib/modules/3.8.0-30-generic/build/include/linux/version.h not found, looking for generated/uapi/linux/version.h instead.

The core file indicates that the process exited upon receiving SIGSEGV.

Core was generated by `/usr/lib/vmware/bin/vmware-gksu --su-mode --message=Please enter the root passw'.
Program terminated with signal 11, Segmentation fault.
#0 __strcmp_ssse3 () at ../sysdeps/i386/i686/multiarch/strcmp-ssse3.S:229

workaround

It seems to be a known issue. Please refer to the following thread on askubuntu.

So workaround is to:

  1. create symbolic link of version.h header for current kernel revision
  2. compile

Procedure is as follows.

$ uname -r
3.8.0-30-generic
$ sudo ln -s /usr/src/linux-headers-`uname -r`/include/generated/uapi/linux/version.h /usr/src/linux-headers-`uname -r`/include/linux/version.h
$ sudo vmware-modconfig --console --install-all

Now you can enjoy vmplayer again.

remark

There are so many headers...

$ ls -ld /usr/src/linux-headers-3.8.0-*-generic
drwxr-xr-x 7 root root 4096 May 18 14:34 /usr/src/linux-headers-3.8.0-21-generic
drwxr-xr-x 7 root root 4096 May 25 08:59 /usr/src/linux-headers-3.8.0-22-generic
drwxr-xr-x 7 root root 4096 May 31 09:59 /usr/src/linux-headers-3.8.0-23-generic
drwxr-xr-x 7 root root 4096 Jun 19 10:01 /usr/src/linux-headers-3.8.0-25-generic
drwxr-xr-x 7 root root 4096 Jul  5 09:58 /usr/src/linux-headers-3.8.0-26-generic
drwxr-xr-x 7 root root 4096 Jul 31 09:22 /usr/src/linux-headers-3.8.0-27-generic
drwxr-xr-x 7 root root 4096 Aug 21 10:07 /usr/src/linux-headers-3.8.0-29-generic
drwxr-xr-x 7 root root 4096 Sep  6 09:52 /usr/src/linux-headers-3.8.0-30-generic
]]>
Sat, 07 Sep 2013 00:00:00 +0900
http://skyhigh71.github.io/2013/09/06/python_tools_invocation.html http://skyhigh71.github.io/2013/09/06/python_tools_invocation.html <![CDATA[python tools invocation]]>

python tools invocation

There are many tools on the Linux platform (e.g. yum/hg) that are written in python. For example, on my ubuntu 13.04 box there are 45 python scripts under the /usr/local/bin directory.

$ file /usr/local/bin/*|grep -ic python
45

So I should say that one can not live without them.

How are these python tools invoked?

Take tinker as an example.

$ which tinker
/usr/local/bin/tinker
$ file /usr/local/bin/tinker
/usr/local/bin/tinker: Python script, ASCII text executable

So executing the tinker command hooks into a python package.

#!/usr/bin/python
# EASY-INSTALL-ENTRY-SCRIPT: 'Tinkerer==1.2.1','console_scripts','tinker'
__requires__ = 'Tinkerer==1.2.1'
import sys
from pkg_resources import load_entry_point

if __name__ == '__main__':
    sys.exit(
        load_entry_point('Tinkerer==1.2.1', 'console_scripts', 'tinker')()
    )

Where does this package come from? Let’s search for its location in a primitive way.

$ strace -o test.txt -e open tinker -d "test"
$ grep tinker test.txt |grep -iv enoent
...
open("/usr/local/lib/python2.7/dist-packages/tinkerer/__init__.py", O_RDONLY|O_LARGEFILE) = 3
open("/usr/local/lib/python2.7/dist-packages/tinkerer/__init__.pyc", O_RDONLY|O_LARGEFILE) = 4

OK, here is the location where the relevant python package resides.

$ ls /usr/local/lib/python2.7/dist-packages/tinkerer
__init__.py   cmdline.py   draft.py   ext          master.py   page.py   paths.py   post.py   static  utils.py   writer.py
__init__.pyc  cmdline.pyc  draft.pyc  __templates  master.pyc  page.pyc  paths.pyc  post.pyc  themes  utils.pyc  writer.pyc

So when one installs a python package, say with the pip command, the package is deployed under the dist-packages directory. At the same time, a wrapper script is placed on the search path, which invokes the relevant package by making use of pkg_resources.
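If strace feels heavyweight, the interpreter itself can report where it loaded a module from. Here is a sketch using the stdlib os module as a stand-in for tinkerer (python3 here, though the post's scripts target python 2):

```shell
# Ask the interpreter where a module was loaded from; on a box with
# Tinkerer installed, replace "os" with "tinkerer".
python3 -c 'import os; print(os.__file__)'
```

The printed path is the same directory that the strace hunt above eventually reveals.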

]]>
Fri, 06 Sep 2013 00:00:00 +0900
http://skyhigh71.github.io/2013/09/04/compare_files_recursively.html http://skyhigh71.github.io/2013/09/04/compare_files_recursively.html <![CDATA[compare files recursively]]>

compare files recursively

I have to confess that I’m so ignorant...

You can compare all the files under two directories recursively with the diff command’s -r option.

For example,

$ diff -r <DIR_A> <DIR_B>

So easy. sigh...
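A throwaway sketch (all paths created on the fly) shows the two kinds of report diff -r produces; the -q option lists only which files differ:

```shell
# Build two small trees that differ in one file's content and one extra file.
tmp=$(mktemp -d)
mkdir -p "$tmp/a/sub" "$tmp/b/sub"
echo one   > "$tmp/a/sub/common.txt"
echo two   > "$tmp/b/sub/common.txt"
echo extra > "$tmp/a/only_here.txt"

diff -rq "$tmp/a" "$tmp/b" || true   # diff exits 1 when differences exist
# -> Only in .../a: only_here.txt
# -> Files .../a/sub/common.txt and .../b/sub/common.txt differ
rm -rf "$tmp"
```

Drop -q to see the full line-by-line diffs for files that differ.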

]]>
Wed, 04 Sep 2013 00:00:00 +0900
http://skyhigh71.github.io/2013/09/04/distribute_requests_by_url_on_apache.html http://skyhigh71.github.io/2013/09/04/distribute_requests_by_url_on_apache.html <![CDATA[distribute requests by URL on apache]]>

distribute requests by URL on apache

what’s in your mind?

Suppose that you have contents built on sphinx. And you have already deployed it on apache instance.

Suppose that you would like to deploy another content. That is, you started blogging and would like to publish it on the same apache instance.

You can configure the apache instance so that it routes requests to content by the given URL.

Let’s get started

You could create a symbolic link to the target directory under the document root, but that seems quite crude. Instead, you can make use of the “LocationMatch” directive.

Create a configuration file, e.g. tinker.conf, under the /etc/httpd/conf.d directory so that httpd can evaluate the request.

Alias /blog <TINKER_ROOT>/blog/html/
<LocationMatch "/blog/.*">
    AuthType Basic
    AuthName "Open Sesame!"
    AuthUserFile <PATH_TO_PASS>/htpasswd
    Require user lupin
</LocationMatch>

All requests for <SERVER>/blog/* will be routed to the content under the <TINKER_ROOT>/blog/html directory. Replace <TINKER_ROOT> and <PATH_TO_PASS> as appropriate for your environment.

In this sample, we apply basic authentication for access to this content, and user “lupin” can view the content if authentication succeeds.

Create the password file for lupin with the htpasswd command.

$ htpasswd -c htpasswd lupin

If you would like to allow access for other users, then add the accounts to the user list. Append user names separated by white space.

Require user lupin fujiko jigen
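If the explicit list grows unwieldy, Apache’s Require valid-user admits any account present in the password file; a sketch reusing the directives above:

```
<LocationMatch "/blog/.*">
    AuthType Basic
    AuthName "Open Sesame!"
    AuthUserFile <PATH_TO_PASS>/htpasswd
    # any user with an entry in htpasswd may view the content
    Require valid-user
</LocationMatch>
```

Then adding a user is just another htpasswd invocation, with no config reload of the user list itself.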

And restart httpd to reflect the changes above.

$ sudo service httpd restart

Enjoy!

]]>
Wed, 04 Sep 2013 00:00:00 +0900
http://skyhigh71.github.io/2013/09/02/switching_from_https_to_ssh.html http://skyhigh71.github.io/2013/09/02/switching_from_https_to_ssh.html <![CDATA[switching from HTTPS to SSH]]>

switching from HTTPS to SSH

Communication protocol for github can either be SSH or HTTPS. You can configure protocol to SSH so that ssh key shall be used for authentication. And you do not have to enter password every time you communicate with github.

URL, which includes protocol, is specified in .git/config file.

[remote "origin"]
    url = https://github.com/<USERNAME>/<REPO>.git

To change the URL, use the “git remote set-url” command.

$ git remote set-url origin git@github.com:<USERNAME>/<REPO>.git

The URL in the config file will be updated as specified in the argument above.

[remote "origin"]
    url = git@github.com:<USERNAME>/<REPO>.git

Therefore you may be able to configure the URL directly by modifying the value in the config file (not tested though :-)).
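The behavior can be checked with a throwaway repository (placeholder names, a sketch): set-url rewrites exactly that config value.

```shell
# Create a scratch repo, point origin at an HTTPS URL, then switch to SSH.
cd "$(mktemp -d)"
git init -q demo && cd demo
git remote add origin https://github.com/USERNAME/REPO.git
git remote set-url origin git@github.com:USERNAME/REPO.git
git config --get remote.origin.url
# -> git@github.com:USERNAME/REPO.git
```

Reading the value back with git config confirms that .git/config now holds the SSH URL.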

]]>
Mon, 02 Sep 2013 00:00:00 +0900
http://skyhigh71.github.io/2013/09/01/google_anlytics_in_sphinx.html http://skyhigh71.github.io/2013/09/01/google_anlytics_in_sphinx.html <![CDATA[google anlytics in sphinx]]>

google analytics in sphinx

If you would like to analyze statistics of access to your site built with sphinx, then you can embed the google analytics code in it.

Prepare a layout.html file under the source/_templates directory and insert the javascript code obtained at the google analytics site.

{% extends "!layout.html" %}

{% block footer %}
{{ super() }}
<div class="footer">
<script>
(function(i,s,o,g,r,a,m){
    i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
        (i[r].q=i[r].q||[]).push(arguments)
    },i[r].l=1*new Date();
    a=s.createElement(o),m=s.getElementsByTagName(o)[0];
    a.async=1;a.src=g;
    m.parentNode.insertBefore(a,m)
 })(window,document,'script','//www.google-analytics.com/analytics.js','ga');
ga('create', 'UA-xxxxxxxxx-x', '<site_name>');
ga('send', 'pageview');
</script>
</div>
{% endblock %}

Then rebuild the html and deploy it onto the site.

For details, please refer to sphinx FAQ.

]]>
Sun, 01 Sep 2013 00:00:00 +0900
http://skyhigh71.github.io/2013/09/01/back_up_files_onto_dropbox.html http://skyhigh71.github.io/2013/09/01/back_up_files_onto_dropbox.html <![CDATA[back up files onto dropbox]]>

back up files onto dropbox

Every machine that includes a hard disk will break some day. Let’s back up your precious content onto dropbox, encrypted, on a regular basis.

Create a backup directory in dropbox.

$ mkdir ~/Dropbox/backup

Prepare a backup script kicked by cron at a regular interval. Create a directory where the backup script resides:

$ mkdir ~/Documents/cron

Deploy the following backup script under the cron directory:

#!/usr/bin/env bash

# we use day of week as file name
LANG=C
NAME=`date "+%A"`
# please specify password for encryption in case that you encrypt backup file
PASSWD=xxxxxx
# we create backup file under temporary directory on local host
# so as to avoid latency
TEMP=/tmp
BACKUP_DIR=~/Dropbox/backup

cd ~/Documents
if true     # set to false to create an unencrypted archive
then
    find . -exec zip -P $PASSWD $TEMP/$NAME.zip {} \; 1>/dev/null 2>&1
else
    find . -exec zip $TEMP/$NAME.zip {} \; 1>/dev/null 2>&1
fi
mv $TEMP/$NAME.zip $BACKUP_DIR
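The %A basename gives a natural 7-day rotation: each weekday’s archive silently replaces the one from a week earlier. A quick sketch of the names produced (GNU date assumed; the start date is a known Sunday):

```shell
# Print the archive basenames the script would use over one week.
for d in 0 1 2 3 4 5 6; do
    LANG=C date -d "2013-09-01 + $d day" "+%A"
done
# -> Sunday, Monday, ..., Saturday (one per line)
```

So at most seven archives ever exist, without any explicit cleanup logic.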

Please note that you have to add execute permission to this script. Then test whether it works:

$ bash -xv backup.sh

Register a cron job with “crontab -e” to start the script on a regular basis. With the following example, the backup script is executed every hour at *:00.

0 * * * * ~/Documents/cron/backup.sh
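For reference, the five fields are minute, hour, day of month, month, and day of week. A few hypothetical schedule variants for the same script:

```
# m  h  dom mon dow  command
0    *  *   *   *    ~/Documents/cron/backup.sh   # every hour, on the hour
30   3  *   *   *    ~/Documents/cron/backup.sh   # once a day at 03:30
0    4  *   *   0    ~/Documents/cron/backup.sh   # Sundays at 04:00
```

Pick an interval that matches how often the content actually changes.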

On ubuntu, cron activity is recorded in /var/log/syslog. Monitor syslog to see whether cron works as expected.

]]>
Sun, 01 Sep 2013 00:00:00 +0900
http://skyhigh71.github.io/2013/09/01/try_sphinx_better_theme.html http://skyhigh71.github.io/2013/09/01/try_sphinx_better_theme.html <![CDATA[try sphinx better theme]]>

try sphinx better theme

Found a new theme for sphinx.

Demo site looks nice & simple.

Let’s apply it and see what effect it has on my documentation.

The procedure is quite simple. Install the sphinx-better-theme package.

$ sudo pip install sphinx-better-theme

Apply theme by modifying conf.py.

from better import better_theme_path
html_theme_path = [better_theme_path]
html_theme = 'better'

OK, I’ll stay with this theme.

]]>
Sun, 01 Sep 2013 00:00:00 +0900
http://skyhigh71.github.io/2013/08/31/wiki_on_bitbucket.html http://skyhigh71.github.io/2013/08/31/wiki_on_bitbucket.html <![CDATA[memo on bitbucket]]>

memo on bitbucket

I would like to have a place where I can put some static content created by sphinx. Here is the procedure to deploy html files onto bitbucket.

Create a repository under the name <username>.bitbucket.org. It can be either public or private, and you can select mercurial or git as your tool. In this scenario, I use hg.

Create a sphinx project to host your content.

$ mkdir wiki && cd wiki
$ sphinx-quickstart

Write up your content and build it as usual.

$ make html

Copy content from build/html directory.

$ mkdir publish
$ cd publish/
$ rsync -av ../build/html/ .

Now the content is ready for upload, so prepare for publication.

Install mercurial if you do not have it at hand.

$ sudo apt-get install python-dev
$ sudo pip install mercurial
$ which hg
/usr/local/bin/hg

OK, now we have hg.

Create .hgrc file under home directory.

[ui]
username = First LAST <your_mail_address>
verbose = True

Create an ssh key for bitbucket. Pass -f so that the key pair gets a dedicated name.

$ ssh-keygen -f ~/.ssh/id_rsa.bitbucket

~/.ssh$ ls id_rsa.bitbucket*
id_rsa.bitbucket  id_rsa.bitbucket.pub

Configure the ~/.ssh/config file so that the newly created ssh key is referenced for bitbucket access.

Host bitbucket.org
    HostName        bitbucket.org
    IdentityFile    ~/.ssh/id_rsa.bitbucket
    User            hg

Next, register your public key on bitbucket. xclip copies it to the clipboard so that you can paste it into your account settings.

$ xclip -sel clip < id_rsa.bitbucket.pub

Then place an hgrc file under the .hg directory of the publish directory to specify the push target.

[paths]
default=ssh://hg@bitbucket.org/<userid>/<userid>.bitbucket.org

And push your content onto bitbucket.

$ rsync -av ../build/html/ .
$ hg add .
$ hg commit -m "my first post of mine"
$ hg push

Now you can access your content at http://<userid>.bitbucket.org.

]]>
Sat, 31 Aug 2013 00:00:00 +0900
http://skyhigh71.github.io/2013/08/30/blockdiag_in_tinkerer.html http://skyhigh71.github.io/2013/08/30/blockdiag_in_tinkerer.html <![CDATA[chart in blog post]]>

chart in blog post

As in sphinx, you can make use of blockdiag as an extension in tinkerer. That is, you can draw charts in your blog with blockdiag.

Here is a quick setup procedure.

First install sphinxcontrib-blockdiag package.

$ sudo pip install sphinxcontrib-blockdiag

Then add ‘sphinxcontrib.blockdiag’ to the list of extensions in conf.py.

extensions = ['tinkerer.ext.blog', 'tinkerer.ext.disqus', 'sphinxcontrib.blockdiag']

You can specify a font of your choice for blockdiag. Search for the path of your font.

$ fc-list

And specify the font path as the value of blockdiag_fontpath.

blockdiag_fontpath = '/usr/share/fonts/truetype/ttf-dejavu/DejaVuSansMono.ttf'

Now it’s ready for usage.

For example, a simple chart is embedded at this point in the original post (the rendered image does not survive in this feed).
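A minimal blockdiag directive (node names arbitrary, a sketch rather than the post’s actual source) looks like:

```rst
.. blockdiag::

   {
      A -> B -> C;
      B -> D;
   }
```

Rebuilding the blog renders this as a box-and-arrow chart inline in the post.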

]]>
Fri, 30 Aug 2013 00:00:00 +0900
http://skyhigh71.github.io/2013/08/30/prevent_broken_pipe_during_ssh_session.html http://skyhigh71.github.io/2013/08/30/prevent_broken_pipe_during_ssh_session.html <![CDATA[prevent broken pipe during ssh session]]>

prevent broken pipe during ssh session

From time to time, your ssh session may be lost with the following error:

Write failed: Broken pipe

It means that a write() system call failed on the file descriptor for the ssh session’s socket.

$ man -s2 write
...
EPIPE  fd is connected to a pipe or socket whose reading end is closed.  When this  happens  the  writing
       process  will  also  receive  a SIGPIPE signal.  (Thus, the write return value is seen only if the
       program catches, blocks or ignores this signal.)

You may be able to avoid this connection loss by keeping the connection alive.

client

You can configure the ssh client to request a response from sshd in the background, at an interval given in seconds (~/.ssh/config).

With the following sample, requests are sent every 2 minutes.

ServerAliveInterval 120

For details, please refer to “man ssh_config”.
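ServerAliveInterval works together with ServerAliveCountMax (default 3): the client disconnects after that many unanswered keep-alive probes. A per-host sketch for ~/.ssh/config (the host name is a placeholder):

```
Host myserver
    ServerAliveInterval 120
    ServerAliveCountMax 3    # give up after 3 missed replies (~6 minutes)
```

Scoping it per host keeps the probes off connections that do not need them.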

server

As on the client side, sshd can be configured to send requests on a regular basis. The sshd configuration file is /etc/ssh/sshd_config.

ClientAliveInterval 120

For details, please refer to “man sshd_config”.

]]>
Fri, 30 Aug 2013 00:00:00 +0900
http://skyhigh71.github.io/2013/08/27/http_filter.html http://skyhigh71.github.io/2013/08/27/http_filter.html <![CDATA[filter packet based upon URI]]>

filter packet based upon URI

Wireshark is your good friend; examining packets with it can be great fun.

Sometimes you would like to filter packets and extract only those for a specific URL, say “www.google.com”. Wireshark provides a content filter for HTTP as well.

Specify the filter as follows.

http contains "www.google.com"

The display filter manual will guide you through the details of filter configuration.

]]>
Tue, 27 Aug 2013 00:00:00 +0900
http://skyhigh71.github.io/2013/08/26/automatic_yum_update.html http://skyhigh71.github.io/2013/08/26/automatic_yum_update.html <![CDATA[automatic yum update]]>

automatic yum update

Sometimes you would like to configure Scientific Linux to update packages automatically without manual intervention. Here is a short memo on how to achieve that.

yum-cron

You can configure cron to start yum on a daily basis to check for updates. First install the yum-cron package.

$ sudo yum install yum-cron

Installation places a 0yum.cron file under the /etc/cron.daily directory, and cron executes it daily.

You can control yum-cron’s behavior by modifying its configuration file, /etc/sysconfig/yum-cron. Here are some parameters you may be interested in:

# Don't install, just check (valid: yes|no)
CHECK_ONLY=no

# Don't install, just check and download (valid: yes|no)
DOWNLOAD_ONLY=no

Start the service now, and configure the system to start yum-cron upon reboot.

$ sudo /etc/init.d/yum-cron start
$ sudo chkconfig yum-cron on

history

You can check whether cron has run by referencing the /var/log/cron* logs.

Aug 26 03:16:01 <hostname> run-parts(/etc/cron.daily)[24221]: starting 0yum.cron
Aug 26 04:07:46 <hostname> run-parts(/etc/cron.daily)[24421]: finished 0yum.cron

Log messages are written to the /var/log/yum.log file

Aug 09 04:09:11 Updated: nss-softokn-devel-3.14.3-3.el6_4.i686
Aug 09 04:09:12 Updated: nss-devel-3.14.3-4.el6_4.i686
Aug 09 04:09:13 Updated: nss-tools-3.14.3-4.el6_4.i686
Aug 14 03:36:46 Updated: httpd-tools-2.2.15-29.sl6.i686
Aug 14 03:36:48 Updated: httpd-2.2.15-29.sl6.i686

as specified in yum’s configuration file, /etc/yum.conf.

[main]
debuglevel=2
logfile=/var/log/yum.log

Or you can check the history with the “yum history” command.

$ sudo yum history
Loaded plugins: auto-update-debuginfo, downloadonly, fastestmirror, refresh-packagekit, security
ID     | Login user               | Date and time    | Action(s)      | Altered
-------------------------------------------------------------------------------
   138 | root <root>              | 2013-08-14 03:36 | Update         |    2
   137 | root <root>              | 2013-08-09 04:09 | Update         |   12
...

And you can check more details of an update with “yum history <ID>”.

sigh

Ah...

By default, SL ships similar functionality, yum-autoupdate, which is enabled and executed daily. So the configuration above is a duplicate and not necessary.

You can disable yum-autoupdate by editing the /etc/sysconfig/yum-autoupdate file.

ENABLED="false"
]]>
Mon, 26 Aug 2013 00:00:00 +0900
http://skyhigh71.github.io/2013/08/25/how_to_play_minecraft_on_ubuntu.html http://skyhigh71.github.io/2013/08/25/how_to_play_minecraft_on_ubuntu.html <![CDATA[how to play minecraft on ubuntu]]>

how to play minecraft on ubuntu

My daughter would like to play her favorite game, minecraft, on her ubuntu box. Here is a short memo summarizing how to set up the environment to play minecraft on ubuntu.

JVM

The minecraft client is a jar file, so you had better have the HotSpot VM. You can install it as follows:

$ sudo add-apt-repository ppa:webupd8team/java
$ sudo apt-get update
$ sudo apt-get install oracle-java7-installer

search path

With the default configuration, the shared object libjawt.so cannot be found. You have to configure the LD_LIBRARY_PATH environment variable so that libjawt.so can be located.

LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/lib/jvm/java-7-oracle/jre/lib/i386/

key input

ibus intercepts key input in minecraft, so you cannot type anything. You can avoid this problem with the following environment variable:

XMODIFIERS=@im=none

script

Here is a sample script to launch minecraft. If you would like it to speak upon start-up, please install the TTS engine festival.

#!/usr/bin/env bash

LANG=C
LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/lib/jvm/java-7-oracle/jre/lib/i386/
XMODIFIERS=@im=none
export LANG LD_LIBRARY_PATH XMODIFIERS

MINE_HOME=~/minecraft
cd $MINE_HOME

START_MSG="start minecraft"
echo $START_MSG|festival --tts

LOG=play.`date "+%Y%m%d"`.log

echo "START: " `date` >> $LOG
java -Xmx1024M -Xms512M -cp minecraft.jar net.minecraft.LauncherFrame >> $LOG 2>&1
echo "FINISH: " `date` >> $LOG

register unity panel

You can launch the above script directly from the unity panel.

  • desktop file

Create a desktop configuration file (e.g. minecraft.desktop) so that unity knows how to launch the application.

[Desktop Entry]
Type=Application
Terminal=false
Name=minecraft
Icon=/PATH_TO_ICON_DIR/minecraft.ico
Exec=/PATH_TO_SCRIPT_DIR/minecraft.sh

Put this desktop configuration file under the /usr/share/applications directory.

  • unity panel

And drag & drop this desktop configuration file onto unity panel.

]]>
Sun, 25 Aug 2013 00:00:00 +0900
http://skyhigh71.github.io/2013/08/25/add_favicon.html http://skyhigh71.github.io/2013/08/25/add_favicon.html <![CDATA[add favicon]]>

add favicon

You can configure a favicon for your blog.

Prepare your favorite icon and place it under the _static directory.

$ file _static/skyhigh71.ico
_static/skyhigh71.ico: MS Windows icon resource - 1 icon

And point to the newly deployed icon file in conf.py.

# Change your favicon (new favicon goes in _static directory)
html_favicon = 'skyhigh71.ico'
]]>
Sun, 25 Aug 2013 00:00:00 +0900
http://skyhigh71.github.io/2013/08/24/customize_tinkerer_page_continued.html http://skyhigh71.github.io/2013/08/24/customize_tinkerer_page_continued.html <![CDATA[customize tinkerer page continued]]>

customize tinkerer page continued

language

The LANG environment variable of my shell is set to ja_JP.UTF-8.

$ echo $LANG
ja_JP.UTF-8

Therefore I first thought that I had to set LANG=C when building the html so as to obtain English output.

$ LANG=C tinker -b

I had assumed the language parameter in conf.py would accept “en” as well, but the value “en” does not work: you will encounter an error stating “loading translations [en]... locale not available”. Sorry...

language = "en"

And if you would like to configure, say, German as the default language of your blog, then set it to “de”.

language = "de"

But localization is not perfect; for example, dates seem to be produced based upon LANG. So you had better set LANG when building the html.

$ LANG=de_DE.utf8 tinker -b

theme

You can select a theme from the following options and set it in conf.py.

  • flat (default)
  • modern5
  • minimal5
  • responsive
  • dark

post per page

Sometimes the default number of posts per page (10) seems too long. You can configure it with the posts_per_page parameter.

# Number of blog posts per page
posts_per_page = 3
]]>
Sat, 24 Aug 2013 00:00:00 +0900
http://skyhigh71.github.io/2013/08/24/how_to_yield_core_file.html http://skyhigh71.github.io/2013/08/24/how_to_yield_core_file.html <![CDATA[how to yield core file]]>

how to yield core file

By default, gdb is provided on ubuntu desktop, so you can enjoy debugging applications (if you like :-)).

$ which gdb
/usr/bin/gdb
$ gdb -v
GNU gdb (GDB) 7.5.91.20130417-cvs-ubuntu

But it seems that no core file is produced when an application crashes (quite natural). Here is a memo describing how to configure the system to yield core files.

ulimit

The core file size limit is set to zero.

$ ulimit -a
core file size          (blocks, -c) 0

You can change the size with the ulimit command. If you would like to set it permanently, configure the value in /etc/security/limits.conf.

*               soft    core            unlimited

In this case, the soft limit for the default entry is set to unlimited. For details, please refer to the man page of limits.conf.
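Before editing limits.conf you can try the change per shell; a sketch (the soft limit can only be raised up to the hard limit):

```shell
# Show the hard ceiling, then raise the soft limit up to it.
# The soft limit is what the kernel enforces when writing a core file;
# this change affects only the current shell and its children.
ulimit -H -c
ulimit -S -c "$(ulimit -H -c)"
ulimit -S -c
```

A process started from this shell inherits the new limit, so a crash can now produce a core file.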

core file pattern

By default, application failures are reported back to developers via apport.

The core dump is passed to the apport program via a pipe, as described in the /proc/sys/kernel/core_pattern file.

$ cat /proc/sys/kernel/core_pattern
|/usr/share/apport/apport %p %s %c

Therefore you can change the output location of core files by manipulating the core_pattern file. The following sample configures core files to be written under the /var/cores directory with the file name core.<executable_name>.<PID>.<timestamp>.

$ sudo mkdir /var/cores
$ sudo chmod a+w /var/cores
$ sudo bash -c "echo /var/cores/core.%e.%p.%t > /proc/sys/kernel/core_pattern"
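The %-specifiers come from man 5 core; the ones used above, plus a few other common ones:

```
%e  executable filename (without path)
%p  PID of the dumping process
%t  time of dump, in seconds since the epoch
%s  number of the signal causing the dump
%u  real UID of the dumping process
%h  hostname
```

Adding %s to the pattern, for instance, records which signal killed each process.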

permanent change

The above change will be lost upon system reboot. To make it persist across reboots, add the following line to /etc/sysctl.conf.

kernel.core_pattern=/var/cores/core.%e.%p.%t

Please note that apport will overwrite this kernel parameter as follows (/etc/init/apport.conf).

echo "|/usr/share/apport/apport %p %s %c" > /proc/sys/kernel/core_pattern

This phenomenon seems to have already been discussed in bug #1080978.

As a temporary measure to avoid this overwrite, let us disable apport for now (/etc/default/apport).

#enabled=1
enabled=0

summary

Now you will see core files. Enjoy your debugging life.

By the way, repeated application failures will consume disk space. Please be careful and monitor free disk space.

]]>
Sat, 24 Aug 2013 00:00:00 +0900
http://skyhigh71.github.io/2013/08/23/customize_tinker_page.html http://skyhigh71.github.io/2013/08/23/customize_tinker_page.html <![CDATA[customize tinkerer page]]>

customize tinkerer page

tweet button

You can add a tweet button to each post. Here is the sample code to add to the page.html file under the _templates directory.

<!-- Twitter button -->
<a href="https://twitter.com/share" class="twitter-share-button">Tweet</a>
<script>!function(d,s,id){var js,fjs=d.getElementsByTagName(s)[0],p=/^http:/.test(d.location)?'http':'https';if(!d.getElementById(id)){js=d.createElement(s);js.id=id;js.src=p+'://platform.twitter.com/widgets.js';fjs.parentNode.insertBefore(js,fjs);}}(document, 'script', 'twitter-wjs');</script>

This code can be created manually by referencing the twitter site.

google analytics

You can gather statistics for your blog with google analytics. On the google analytics site, create a Tracking ID for your blog, which yields javascript like this.

Please note that you have to exclude the <script> tags.

(function(i,s,o,g,r,a,m){
    i['GoogleAnalyticsObject']=r;
    i[r]=i[r]||function(){
        (i[r].q=i[r].q||[]).push(arguments)
    },i[r].l=1*new Date();
    a=s.createElement(o),m=s.getElementsByTagName(o)[0];
    a.async=1;
    a.src=g;
    m.parentNode.insertBefore(a,m)
})(window,document,'script','//www.google-analytics.com/analytics.js','ga');

ga('create', 'UA-xxxxxxxx-1', '<your_site>');
ga('send', 'pageview');

Save this code as google_analytics.js under _static directory.

And add the following lines to page.html to incorporate this javascript.

{% extends "!page.html" %}

{% set script_files = script_files + ["_static/google_analytics.js"] %}

With the above configurations applied, page.html looks as follows.

{% extends "!page.html" %}

{% set script_files = script_files + ["_static/google_analytics.js"] %}

{%- block body %}
    <div class="section_head">
    <div class="timestamp_layout">
      {{ timestamp(metadata.formatted_date) }}
    </div>
    {% block buttons %}
    <div class="buttons">
      <!-- Twitter button -->
      <a href="https://twitter.com/share" class="twitter-share-button">Tweet</a>
      <script>!function(d,s,id){var js,fjs=d.getElementsByTagName(s)[0],p=/^http:/.test(d.location)?'http':'https';if(!d.getElementById(id)){js=d.createElement(s);js.id=id;js.src=p+'://platform.twitter.com/widgets.js';fjs.parentNode.insertBefore(js,fjs);}}(document, 'script', 'twitter-wjs');</script>
    </div>
    {% endblock %}
    </div>
    {{ link }}
    {{ body }}
    {{ post_meta(metadata) }}
    {{ comments }}
{% endblock %}
]]>
Fri, 23 Aug 2013 00:00:00 +0900
http://skyhigh71.github.io/2013/08/22/enable_sshd.html http://skyhigh71.github.io/2013/08/22/enable_sshd.html <![CDATA[enable sshd]]>

enable sshd

By default, sshd is disabled on ubuntu desktop. Therefore you need to install & enable sshd if you would like to access the host remotely via ssh.

Here is a short memo for this sake.

First, search for an sshd package via the apt-cache command.

$ sudo apt-cache search sshd
openssh-server - secure shell (SSH) server, for secure access from remote machines

$ sudo apt-cache show openssh-server
Package: openssh-server
...
Description-en: secure shell (SSH) server, for secure access from remote machines
...
This package provides the sshd server.

OK, openssh-server seems to be what we are looking for.

$ sudo apt-get install openssh-server

Installation automatically starts the sshd daemon.

$ sudo lsof -nPi:22
COMMAND  PID    USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
sshd    3557    root    3u  IPv4  35564      0t0  TCP *:22 (LISTEN)
sshd    3557    root    4u  IPv6  35566      0t0  TCP *:22 (LISTEN)

$ sudo service ssh status
ssh start/running, process 3557

Please note that the service is registered not as sshd but as ssh.

sshd configuration can be set in /etc/ssh/sshd_config. If you would like to disable remote root login, change the PermitRootLogin parameter from yes to no.

PermitRootLogin no
]]>
Thu, 22 Aug 2013 00:00:00 +0900
http://skyhigh71.github.io/2013/08/21/swap_caps_lock_with_ctrl.html http://skyhigh71.github.io/2013/08/21/swap_caps_lock_with_ctrl.html <![CDATA[swap Caps Lock key with Ctrl key]]>

swap Caps Lock key with Ctrl key

I had been using a Happy Hacking Keyboard (HHK) for about a decade. It is a really nice keyboard, but it finally wore out and its key touch became quite noisy. So I bought a cheap English-layout keyboard at amazon. (good-bye, HHK)

The problem is that the caps lock key sits in the middle-left position, which is quite inconvenient. So I replaced caps lock with control by reconfiguring the keyboard mapping.

Here is the procedure to do so on ubuntu 13.04 (LXDE).

Edit the keyboard file and set “ctrl:swapcaps” as the value of the XKBOPTIONS option.

$ cd /etc/default
$ diff keyboard keyboard.org
11,12c11
< #XKBOPTIONS=""
< XKBOPTIONS="ctrl:swapcaps"
---
> XKBOPTIONS=""

Alternatively, specify “ctrl:nocaps” if you would like both keys to act as control.

Then execute the dpkg-reconfigure command so that the above change takes effect. Note that merely rebooting the system does not apply this keyboard configuration change; the mapping remains as it was.

$ sudo dpkg-reconfigure keyboard-configuration

Answer the questions the command asks; at the last stage you will be prompted whether to save the above configuration change.

The following emacs page has a quite comprehensive explanation of this, which should help you.

And the following debian page explains keyboard configuration in general.

Note that the above dpkg-reconfigure command updates the initrd.img-X.X.X-XX-generic file under the /boot directory. Therefore, if the OS updates the image file, you have to re-execute dpkg-reconfigure to reflect the change onto the new image.

Note that there seems to be more work to do, as the configuration change is lost after rebooting the system. Sorry...

The following thread seems to deal with a similar issue.
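As a stopgap while the setting is lost across reboots, one workaround (my own assumption, not part of the original procedure, and untested on LXDE) is to apply the mapping per X session with setxkbmap, for example from ~/.xsessionrc:

```shell
# ~/.xsessionrc -- executed when the X session starts (hypothetical workaround)
setxkbmap -option ctrl:swapcaps
```

This only affects the current X session, so it sidesteps the initrd issue entirely.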

]]>
Wed, 21 Aug 2013 00:00:00 +0900
http://skyhigh71.github.io/2013/08/11/the_first_post_of_mine.html http://skyhigh71.github.io/2013/08/11/the_first_post_of_mine.html <![CDATA[the first post of mine]]>

the first post of mine

A short memo describing how to host a blog on github.

github configuration

Let’s create a public/private key pair.

$ ssh-keygen
Generating public/private rsa key pair.
...

They are generated under the ~/.ssh directory by default, or under the current working directory if you enter a file name without a path.

$ file *
id_rsa.github:     PEM RSA private key
id_rsa.github.pub: OpenSSH RSA public key

Configure ssh so that the newly created private key is used when accessing github.

~/.ssh$ more config
Host github.com
    HostName      github.com
    IdentityFile  ~/.ssh/id_rsa.github
    User          git

Deploy the public key on github. The xclip command below copies it to the clipboard for pasting into the github settings page.

$ xclip -sel clip < ~/.ssh/id_rsa.github.pub

Now let’s make a test connection to github.

$ ssh -v -T git@github.com

Let’s install git command.

$ sudo apt-get install git

Configure user name and e-mail address for github,

$ git config --global user.name <USERNAME>
$ git config --global user.email <YOUR_EMAIL_ADDRESS>

which will be reflected into .gitconfig file under home directory.

[user]
        name = <USERNAME>
        email = <YOUR_EMAIL_ADDRESS>

Enable the credential cache to avoid typing your password repeatedly.

$ git config --global credential.helper cache
$ git config --global credential.helper 'cache --timeout=3600'

These configurations will be reflected into the .gitconfig file as well. Or you can write them by hand. (maybe)

[credential]
        helper = cache --timeout=3600

tinkerer configuration

You can install tinkerer with pip command.

$ sudo pip install tinkerer

This installs dependent modules such as sphinx if they are not already there. The installed version (on ubuntu 13.04) is as follows:

$ tinker -v
Tinkerer version 1.2.1

Create local directory to host blog.

$ mkdir blog

Execute the tinker command (not tinkerer) with the setup option.

$ tinker --setup
Your new blog is almost ready!
You just need to edit a couple of lines in conf.py
$ tree .
    .
    ├── _static
    ├── _templates
    │   ├── page.rst
    │   └── post.rst
    ├── conf.py
    ├── drafts
    ├── index.html
    └── master.rst

Edit the configuration file conf.py (the same format as sphinx’s) to your taste.

$ diff conf.py.org conf.py
11c11
< project = 'My blog'
---
> project = "sakana"
14c14
< tagline = 'Add intelligent tagline here'
---
> tagline = 'short memo by SkyHigh71'
20c20
< author = 'Winston Smith'
---
> author = 'SkyHigh71'
23c23
< copyright = '1984, ' + author
---
> copyright = '2013-, ' + author
26c26
< website = 'http://127.0.0.1/blog/html/'
---
> website = 'http://skyhigh71.github.com'
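Put together, the edited lines of conf.py read as follows (values taken from the diff above; conf.py is plain python, so the copyright string is built by concatenation):

```python
# conf.py -- the values after the edits shown in the diff above
project = "sakana"
tagline = 'short memo by SkyHigh71'
author = 'SkyHigh71'
copyright = '2013-, ' + author   # concatenates to '2013-, SkyHigh71'
website = 'http://skyhigh71.github.com'
```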

That’s all you have to configure initially on the local side; you can modify the parameters later on.

Create a new repository named <account>.github.io (e.g. SkyHigh71.github.io) on github.

On the local side, create a directory to host the HTML files for publication.

$ mkdir publish
$ cd publish
$ git init
$ git remote add origin https://github.com/SkyHigh71/SkyHigh71.github.io.git

The github URL will be reflected in the config file under the .git directory.

[remote "origin"]
        url = https://github.com/SkyHigh71/SkyHigh71.github.io.git

Disable jekyll by placing an empty .nojekyll file.

$ touch .nojekyll

write up a post and publish it

The tinker command creates an RST file under a YYYY/MM/DD directory.

First prepare the post as a draft. This creates an rst file under the drafts directory.

$ tinker -d "the First Post of mine"
$ ls drafts
the_first_post_of_mine.rst

When you have finished editing your post and are ready to publish it, the following command moves the RST file from the drafts directory to the YYYY/MM/DD directory and registers it in the master.rst file.

$ tinker -p drafts/the_first_post_of_mine.rst

Then build it to generate the html files. (Or you can revert the post back to a draft if you find something that needs to be modified.)

$ LANG=C tinker -b

This will create HTML files under blog/html directory.

Copy the files to the publish directory and commit them.

$ cd publish
$ rsync -av ../blog/html/ .

$ git add .
$ git commit -m "first post"

Finally, publish the post to github. Note that for a user blog you must push to the master branch, not the gh-pages branch. (There seems to be some confusion about which branch to choose.)

$ git push origin master

Please be patient, as it can take several minutes until the posted page comes up. After a while, you can access your blog at http://<account>.github.io/.
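The build-and-publish cycle above can be wrapped in a small helper script. This is only a sketch under the assumptions of this memo: the blog/ and publish/ directories are siblings, and the origin remote is already configured as shown. The snippet writes the script and syntax-checks it without executing the push.

```shell
# Write a helper script for the publish cycle (sketch; layout and remote
# name are taken from the steps in this memo).
cat > publish.sh <<'EOF'
#!/bin/sh
set -e
(cd blog && LANG=C tinker -b)   # build HTML under blog/html
rsync -av blog/html/ publish/   # mirror into the publish working tree
cd publish
git add .
git commit -m "update blog"
git push origin master          # user blogs publish from master
EOF
chmod +x publish.sh
sh -n publish.sh && echo "syntax OK"
```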

]]>
Sun, 11 Aug 2013 00:00:00 +0900
http://skyhigh71.github.io/2013/08/11/hello_world.html http://skyhigh71.github.io/2013/08/11/hello_world.html <![CDATA[Hello World]]>

Hello World

How are you doing?

]]>
Sun, 11 Aug 2013 00:00:00 +0900