Wednesday, December 19, 2007

Polymeraze: Time for merging

Polymeraze is a tool I wrote about two years ago. It allowed me to cope with more than ten Gentoo servers at the same time. One of the main problems with that many machines is keeping their configurations in sync.

Polymeraze allows you to create a kind of knowledge base for configurations that can be applied to multiple hosts. It basically uses template configurations that are converted into the final configuration files by combining them with a host-specific profile.
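The idea can be sketched in a few lines of shell. This is not actual Polymeraze code, just an illustration of the template-plus-profile concept; the file names and the @KEY@ placeholder syntax are invented for the example:

```shell
#!/bin/sh
# Illustration of the template/profile idea, NOT actual Polymeraze code.
# File names and the @KEY@ placeholder syntax are invented for this example.
set -e
cd "$(mktemp -d)"

# A shared configuration template with placeholders.
cat > postfix.template <<'EOF'
myhostname = @HOSTNAME@
relayhost = @RELAYHOST@
EOF

# A host-specific profile: simple key=value pairs.
cat > host1.profile <<'EOF'
HOSTNAME=mail.example.org
RELAYHOST=smtp.example.org
EOF

# Combine template and profile into the final configuration file.
cp postfix.template rendered.conf
while IFS='=' read -r key value; do
    sed -i "s|@${key}@|${value}|g" rendered.conf
done < host1.profile

cat rendered.conf
```

The same template combined with a different host profile yields the configuration for the next machine.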

While the tool was extremely handy, it was also a bad hack, since I wired it directly into the Gentoo package management system. That never really worked well, but since I knew how to handle it I didn't mind too much.

A while ago I realized that this tool shares a few properties with the good old "kolabconf" from the Kolab project. Since then I have dreamed of replacing kolabconf with a rewritten Polymeraze.

I hope that this merge will finally lead to a tool that provides a really solid Kolab server natively integrated into the Gentoo system.

Well, there is still plenty of work ahead. All I have done so far is get the project page up. Now it is time for coding...

Piping your blogroll

Finally I took the time to polish the blog layout. I took a look at my old blog and tried to carry all the blog elements I had over there to this new site. Most of it was rather easy. When it came to the blogroll I discovered that Google Reader is a decent friend here: it already offers a small widget that can be added to the blogger.com site, displaying the feeds from a public folder that I can administrate within Google Reader. Not that I really use the Reader to keep up with my feeds, but keeping a blogroll there is fine. What I wanted in addition was a combined blogroll news feed. That wasn't available, so I went over to Yahoo Pipes and cloned an older pipe into one that grabs and aggregates the feeds from a Google Reader folder. Works like a charm. All I had to do was add a rather long feed URL to my side panel:
http://pipes.yahoo.com/pipes/pipe.run?_id=58761a0ba880bdad87d1f\
69d944b78e&_render=rss&opml=http%3A%2F%2Fwww.google.com%2Freade\
r%2Fpublic%2Fsubscriptions%2Fuser%2F02645926629531261525%2Flabe\
l%2Ftechnical
The result is on the right in the little Blogroll - News box.
A short while later
I realized that I can also get the same kind of feed from the Reader page directly. Stupid me. Anyhow, it was fun playing with Yahoo Pipes...

Thursday, December 06, 2007

A Mercurial patch update session

We recently discussed patch management on the Kolab development mailing list, and today it was time to upgrade to php-5.2.5, so I had to run a patch update cycle again. Some time ago Thomas recommended using Mercurial for patch management, and I must admit that this completely changed the way I deal with code patches and made my life a lot easier. Let me describe how I currently deal with patches when upstream delivers a new version.
Setup
Some time ago I started the patch management for php by downloading the package ...
> wget ftp://php.net/distributions/php-5.2.4.tar.bz2
... and unpacking it:
> tar xfj php-5.2.4.tar.bz2
Now I turned this version into a mercurial repository and added all files to version control:
> cd php-5.2.4
> hg init
> hg commit --addremove -m "php-5.2.4"
> cd ..
Since I needed to patch php I derived (cloned) a second repository from the original one:
> hg clone php-5.2.4 php-PATCHED
In order to add patches to a repository you have to activate the mq (Mercurial Queues) extension:
> cat ~/.hgrc
[ui]
username = Gunnar Wrobel
[extensions]
hgext.mq =
This allows patches to be maintained on top of a version-controlled repository. The patch queue then had to be initialized in the php-PATCHED repository:
> cd php-PATCHED
> hg qinit -c
The -c option puts the new patch directory created by qinit under php-PATCHED/.hg/patches under version control as well. This is not strictly necessary, but I find it convenient to version the patches, too. For the Kolab-specific patching a single patch had to be added to php-PATCHED:
> hg qnew KOLAB_Annotation.patch
> patch -p1 < ~/Kolab-php.patch
> hg qrefresh -m "Provides get/set ANNOTATIONS support to PHP. [Version: 5.2.4]"
qrefresh compiled the changes in php-PATCHED into KOLAB_Annotation.patch within php-PATCHED/.hg/patches. Now I was able to grab the patch from that location and apply it to the modified packages on both the OpenPKG platform and Gentoo.
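Applying such an exported patch on another platform is plain patch -p1. A tiny self-contained demonstration with invented files (the real patch would of course come from php-PATCHED/.hg/patches):

```shell
#!/bin/sh
# Applying an exported patch is plain 'patch -p1'; demonstrated on a
# throwaway tree with invented files instead of the real php sources.
set -e
cd "$(mktemp -d)"
mkdir -p a/src b/src
echo 'original line' > a/src/file.c
sed 's/original/patched/' a/src/file.c > b/src/file.c

# diff exits with status 1 when the files differ, hence the '|| true'.
diff -u a/src/file.c b/src/file.c > demo.patch || true

# Check with --dry-run first, then apply for real.
( cd a && patch -p1 --dry-run < ../demo.patch && patch -p1 < ../demo.patch )
grep patched a/src/file.c
```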
Update
The whole setup would be useless if we never needed to update our patches. The following describes the cycle I perform once upstream has released a new version. Fetch the new version:
> wget ftp://php.net/distributions/php-5.2.5.tar.bz2
Now we clone the original version ...
> hg clone php-5.2.4 php-5.2.5
... and replace it with the new one:
> cd php-5.2.5
> hg locate -0 | xargs -0 rm
> cd ..
> tar xfj php-5.2.5.tar.bz2
> cd php-5.2.5
> hg commit --addremove -m "php-5.2.5"
> cd ..
Now the original repository holds the new version, and we need to bring our patched version up to date. First we remove all applied patches, since we don't know whether they still apply:
> cd php-PATCHED
> hg qpop -a
qpop -a removes all currently applied patches. They are of course still present in the php-PATCHED/.hg/patches directory, just no longer applied. This allows us to cleanly update our base php version now:
> hg pull ../php-5.2.5
> hg update
Pulling and updating brings php-PATCHED to the php-5.2.5 version. Now we can push the stack of patches again and fix any that do not apply cleanly:
> hg qpush
If a patch fails, we modify the source until everything works again; often the changes are minimal. After these updates the patch has to be refreshed:
> hg qrefresh -m "Provides get/set ANNOTATIONS support to PHP. [Version: 5.2.5]"
> hg qci -m "PHP patch for 5.2.5"
The final command runs hg commit within the php-PATCHED/.hg/patches directory. For a single patch the whole procedure may seem a little complicated, but it is rather clean, and it gets really efficient once you carry about twenty patches on an application. That is where the Mercurial queues extension comes in really handy.
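For reference, the whole cycle condenses into a small shell function. This is only a sketch that assumes the php-5.2.x / php-PATCHED layout from this post and a Mercurial with mq enabled; it is not meant as a polished tool:

```shell
#!/bin/sh
# Sketch of the patch update cycle above as one function. Assumes hg with
# the mq extension enabled and the php-5.2.x / php-PATCHED layout from
# this post; defined here but not run against a real tree.
update_patches() {
    old=$1 new=$2   # e.g. update_patches php-5.2.4 php-5.2.5

    # Clone the pristine repository, empty it, and commit the new tarball.
    hg clone "$old" "$new"
    ( cd "$new" && hg locate -0 | xargs -0 rm )
    tar xfj "$new.tar.bz2"
    ( cd "$new" && hg commit --addremove -m "$new" )

    # Pop all patches, pull the new base, push again and fix any rejects.
    ( cd php-PATCHED \
      && hg qpop -a \
      && hg pull "../$new" \
      && hg update \
      && hg qpush -a )
    # After fixing rejects: hg qrefresh -m "..." followed by hg qci -m "..."
}
```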

K2G News: Update of the project site

The project site got a few updates and new links. In addition, an Ohloh project summary has been created.

K2G: Kolab2/Gentoo getting attention again

Finally the Kolab2Gentoo project is getting some attention again. I started fixing some older bugs, but the main task now is preparing for 2.2. I still hate the way the whole project is structured, but there are too many places where massive improvements are needed, so I guess I have to be content with small progress. What is currently on my mind concerning Kolab2/Gentoo-2.2:
  • Update to Horde-3.2
  • Making the patched c-client and php packages recommended instead of required
  • Switching to a specific cyrus-imapd-kolab package
  • Template packages
And then there is the big, bad problem of the Kolab configuration concept, which in its current form simply fails for Gentoo. The solution to that one will take even longer.

Tuesday, December 04, 2007

Creating the coding environment for an older library

Looks like it is time to finally revive my old p@rdus library. It has found a new home on SourceForge and I have started documenting it on Ohloh. Now I just have to start coding on it again. What does it do? It is a Python library that mainly handles configuration settings for command-line tools. I intend to extend it into some other areas, and it will probably remain a collection of tools required for my other Python coding projects.

Friday, November 16, 2007

Sometimes it is the first time: Disc crashed in RAID

I cannot say that I had never experienced a disc crash before. It happens extremely seldom, though, and a disc crash in a RAID array was definitely a first. To my own shame I can't really say that I handled it with grace. Well, sometimes it is the first time and you have to learn.

Two days ago a customer's mail server deadlocked. I got the call early the next morning, and indeed I was unable to do anything with the machine. The only choice left was rebooting. The machine came up but was suspiciously slow. It turned out that the RAID array was constantly trying to resynchronize a central partition of the system. I made an error here: I assumed I should not fiddle with the hard disk at this point, since the workday was just beginning and the customer would have been quite unhappy had his customers been without mail. In fact I should simply have run smartctl right away. It would have told me something like this:
> smartctl -a /dev/sdb

...

SMART Error Log Version: 1
ATA Error Count: 206 (device log contains only the most recent five errors)

...

 Error 206 occurred at disk power-on lifetime: 3931 hours (163 days + 19 hours)
 When the command that caused the error occurred, the device was active or idle.

 After command completion occurred, registers were:
 ER ST SC SN CL CH DH
 -- -- -- -- -- -- --
 40 51 00 50 06 49 e0  Error: UNC at LBA = 0x00490650 = 4785744

 Commands leading to the command that caused the error were:
 CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
 -- -- -- -- -- -- -- --  ----------------  --------------------
 25 00 08 50 06 49 e0 00      06:40:08.182  READ DMA EXT
 27 00 00 00 00 00 e0 00      06:40:08.128  READ NATIVE MAX ADDRESS EXT
 ec 00 00 00 00 00 a0 02      06:40:08.124  IDENTIFY DEVICE
 ef 03 46 00 00 00 a0 02      06:40:06.213  SET FEATURES [Set transfer mode]
 27 00 00 00 00 00 e0 00      06:40:06.145  READ NATIVE MAX ADDRESS EXT

...
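For a quick verdict, smartctl -H /dev/sdb prints an overall health assessment. When scripting, the error count can also be pulled out of the full output; here a captured sample of the log above stands in for a live smartctl -a run:

```shell
#!/bin/sh
# Extract the ATA error count from smartctl output for scripting.
# A captured sample of the log above stands in for 'smartctl -a /dev/sdb'.
smart_output='SMART Error Log Version: 1
ATA Error Count: 206 (device log contains only the most recent five errors)'

errors=$(printf '%s\n' "$smart_output" \
    | awk -F': ' '/^ATA Error Count/ { print $2 + 0 }')

echo "error count: $errors"    # prints: error count: 206
if [ "$errors" -gt 0 ]; then
    echo "disc reports errors - time to fail it out of the array"
fi
```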
Since the partition /dev/sdb8 caused the problem, I should have removed it from the RAID array at this point by issuing:
> mdadm /dev/md8 --fail /dev/sdb8
> mdadm /dev/md8 --remove /dev/sdb8
This would have saved some trouble yesterday: since the machine constantly tried to resync the broken partition, the system was highly unstable and I had to reboot twice again. The machine went to hell again the following night, requiring another reboot the next morning. But since it was the weekend by then, I had no reservations about taking the machine offline for checks anymore. I informed the server support and they swapped the disk. At that point I had to copy the partition table from /dev/sda to the fresh /dev/sdb using fdisk:
> fdisk /dev/sda

Command (m for help): p

Disk /dev/sda: 250.0 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders

Device Boot         Start         End      Blocks   Id  System
/dev/sda1               1         128     1028128+  fd  Linux raid autodetect
/dev/sda2             129         383     2048287+  82  Linux swap / Solaris
/dev/sda4             384       30401   241119585    5  Extended
/dev/sda5             384        1021     5124703+  fd  Linux raid autodetect
/dev/sda6            1022        1659     5124703+  fd  Linux raid autodetect
/dev/sda7            1660        4209    20482843+  fd  Linux raid autodetect
/dev/sda8            4210       30401   210387208+  fd  Linux raid autodetect
Using n to create the new partitions and t to set the correct partition types, I copied the layout to /dev/sdb. Back in the rescue system, the final step was to embed the new disc in the RAID array again:
rescue:~# mdadm --manage /dev/md1 --add /dev/sdb1
rescue:~# mdadm --manage /dev/md5 --add /dev/sdb5
rescue:~# mdadm --manage /dev/md6 --add /dev/sdb6
rescue:~# mdadm --manage /dev/md7 --add /dev/sdb7
rescue:~# mdadm --manage /dev/md8 --add /dev/sdb8
/proc/mdstat shows the progress while the resynchronisation runs:
rescue:~# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [faulty]
md1 : active raid1 sdb1[1] sda1[0]
      1028032 blocks [2/2] [UU]

md5 : active raid1 sdb5[1] sda5[0]
      5124608 blocks [2/2] [UU]

md6 : active raid1 sdb6[2] sda6[0]
      5124608 blocks [2/1] [U_]
        resync=DELAYED

md7 : active raid1 sdb7[2] sda7[0]
      20482752 blocks [2/1] [U_]
        resync=DELAYED

md8 : active raid1 sdb8[2] sda8[0]
      210387136 blocks [2/1] [U_]
      [==============>......]  recovery = 70.1% (147615104/210387136) finish=18.5min speed=56316K/sec

unused devices: <none>
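To keep an eye on the rebuild, watch -n5 cat /proc/mdstat does the job; for scripting, the recovery percentage can be extracted with awk. A captured sample stands in for the live file:

```shell
#!/bin/sh
# Pull the recovery percentage out of /proc/mdstat style output.
# A captured sample stands in for the live file.
mdstat='md8 : active raid1 sdb8[2] sda8[0]
      210387136 blocks [2/1] [U_]
      [==============>......]  recovery = 70.1% (147615104/210387136) finish=18.5min speed=56316K/sec'

pct=$(printf '%s\n' "$mdstat" \
    | awk '/recovery/ { for (i = 1; i <= NF; i++) if ($i ~ /%$/) print $i }')
echo "recovery at $pct"    # prints: recovery at 70.1%
```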
Looking back I must say it was nothing dramatic at all; the only problem was my lack of knowledge of how to handle the situation. So I figured it was worth posting.