Sunday, December 28, 2008

layman-1.2.2 is out

The next layman version has been released and fixes a few minor bugs:
  • layman -L: better use of screen real estate for source URLs (#251032, submitted by Martin von Gagern)
  • Execute subprocesses in a shell. (#235165)
  • Fixed 'layman -S --quiet' yielding "git: 'pull-q' is not a git-command." (#247964)
Thanks to all the contributors!

Saturday, November 15, 2008

layman-1.2.1 is out

The next layman version has been released and fixes a few minor bugs:
  • Fixes for python-2.6 (#237625, submitted by Mike Auty)
  • Better locale support (#235165, submitted by A. F. T. Arahesis)
  • Handle git+ssh://, ssh:// correctly (#230702, submitted by Donnie Berkholz)
  • Do not remove directories if adding an overlay failed (#236945)
In addition there is a feature enhancement:
  • Pass --quiet flag down to the version control system (#236165, submitted by A. F. T. Arahesis).
Thanks to all the contributors!

Wednesday, August 06, 2008

Distributed burden

I just found that my old layman article is freely available. It has probably been accessible for a while already, but I didn't know, so I thought I'd mention it here. It was written for the German Linux Magazin, so it is available in German only.

As a response to this little article I got a short e-mail about a week later. Patricia Jung asked me whether I'd be interested in writing a whole book about Gentoo. And I was. As people probably know...

Wednesday, July 30, 2008

Editing posts in blogger.com without getting them in the RSS again?

Sorry for spamming planet.gentoo.org with old stuff. I simply edited labels on old posts which apparently modified the last edit date and pushed them into the RSS feed again. Any hint on how I can prevent such a thing when blogging via blogger.com?

Tuesday, July 29, 2008

Horde_Kolab_Server-0.1.0 and Horde_Kolab_Format-0.1.1 have been released!

The Horde project released the second PHP PEAR package representing a small subpart of the Kolab functionality within the Horde framework: Horde_Kolab_Server.

The package allows you to access the Kolab LDAP database. Some examples are given on a page in the Kolab wiki.

The package is the second in a series of five packages that will be released over the next few months. The full set of packages will allow you to easily deal with data stored on a Kolab server within your own web applications.

In addition the Horde_Kolab_Format-0.1.1 has been released. This is a bug fix release.

Wednesday, July 16, 2008

Kolab on Gmail

In recent weeks the Kolab-specific code in Horde has been significantly restructured to make it more developer-friendly. This cleanup also made it easy to add a small hack that allows you to run Horde with a standard IMAP server (one that provides no support for folder annotations) as a back end.

The Kolab concept is based on IMAP folder annotations, but so far that feature is only provided by the Cyrus IMAP server. In addition, the Kolab server uses some patches in that area, which means that you always need a full Kolab server as a basis for Kolab-specific development.

For Horde this means that the other developers have no chance to test the Kolab specific code sections even if they sometimes need to touch these areas. But installing a Kolab server is too much of a hurdle.

So I always wanted to allow running the Kolab code on a plain IMAP server. And ever since Gmail started providing IMAP access I considered the idea of Horde/Kolab on Gmail as a back end a nice toy thing.

Today the code that allows this went into Horde CVS. It is far from finished but it is sufficient to provide you with a demo installation.

You can use standard Gmail credentials there. But please be aware that I could grab these credentials! So you should only use a dummy account in your own best interest.

You'll certainly find many bugs or things that are not working yet but it is of course just a demonstration.

This line of coding is something I won't invest too much time into. It will never get any support from Kolab (as using annotations is the better solution), and I don't guarantee that the format I'm using will stay the same. So if you start using the code in a production environment, the next upgrade might prevent access to the old data.

The main intention of this is to ease access to the code and allow more people to play with it.

At the moment annotations are stored using a special Kolab XML format. Each folder gets a single message in this format that carries the UID "1". This message holds all annotation values you'd usually store as folder metadata.

If you want to configure a Horde CVS installation specifically for Gmail you will still need to patch some IMAP parts. For other IMAP servers this might not be necessary.

Friday, July 11, 2008

First part of SyncML concluded

p@rdus completed coding on SyncML for Kolab a while ago and after some serious testing Univention uses the code in production.

In addition a short press release has been issued on several channels.

Horde_Kolab_Format-0.1.0 has been released!

The Horde project released a first PHP PEAR package representing a small subpart of the Kolab functionality within the Horde framework: Horde_Kolab_Format.

The package allows you to read and write the Kolab XML format. The XML data is converted from or to a PHP data array. Some examples are given on a page in the Kolab wiki.

The package is the first in a series of five packages that will be released over the next few months. The full set of packages will allow you to easily deal with data stored on a Kolab server within your own web applications.

Wednesday, July 09, 2008

Generating a PEAR test environment for releasing PEAR packages

As a developer you will often have a large number of different libraries installed. This usually does not match the user's situation, which will be closer to the minimal set of required libraries. This may turn out to be a problem when releasing packages, as missing dependencies are easily overlooked.

For releasing PHP PEAR packages it makes sense to have a separate PEAR environment for testing whether the package to be released declares a correct set of dependencies. If PEAR is already installed on the system, the steps for that are straightforward.

Set up the new repository

Create the test environment with

# mkdir ~/pear-test

A separate PEAR configuration will be needed there:

# pear config-create ~/pear-test ~/pear-test/.pearrc

CONFIGURATION (CHANNEL PEAR.PHP.NET):
=====================================
...

All that is required to complete the test environment is the installation of PEAR itself:

# pear -c ~/pear-test/.pearrc install -o PEAR

WARNING: channel "pear.php.net" has updated its protocols, use "channel-update pear.php.net" to update
Did not download optional dependencies: pear/XML_RPC, use --alldeps to download automatically
pear/PEAR can optionally use package "pear/XML_RPC" (version >= 1.4.0)
downloading PEAR-1.7.2.tgz ...
Starting to download PEAR-1.7.2.tgz (302,744 bytes)
..........................done: 302,744 bytes
downloading Archive_Tar-1.3.2.tgz ...
Starting to download Archive_Tar-1.3.2.tgz (17,150 bytes)
...done: 17,150 bytes
downloading Structures_Graph-1.0.2.tgz ...
Starting to download Structures_Graph-1.0.2.tgz (30,947 bytes)
...done: 30,947 bytes
downloading Console_Getopt-1.2.3.tgz ...
Starting to download Console_Getopt-1.2.3.tgz (4,011 bytes)
...done: 4,011 bytes
install ok: channel://pear.php.net/Archive_Tar-1.3.2
install ok: channel://pear.php.net/Structures_Graph-1.0.2
install ok: channel://pear.php.net/Console_Getopt-1.2.3
install ok: channel://pear.php.net/PEAR-1.7.2
PEAR: Optional feature webinstaller available (PEAR's web-based installer)
PEAR: Optional feature gtkinstaller available (PEAR's PHP-GTK-based installer)
PEAR: Optional feature gtk2installer available (PEAR's PHP-GTK2-based installer)
PEAR: To install optional features use "pear install pear/PEAR#featurename"

A real example

As I am currently working on releasing the Kolab modules in Horde as PEAR packages, I'll use this process as an example.

In order to wrap a PEAR package, the appropriate channel needs to be known to pear. For Horde packages this is pear.horde.org. The channel needs to be discovered first:

# ~/pear-test/pear/pear -c ~/pear-test/.pearrc channel-discover pear.horde.org
Adding Channel "pear.horde.org" succeeded
Discovery of channel "pear.horde.org" succeeded

The initial package to be released will be Kolab_Format. PEAR packaging happens with

# cd Kolab_Format
# ~/pear-test/pear/pear -c ~/pear-test/.pearrc package package.xml

Analyzing lib/Horde/Kolab/Format/XML/contact.php
Analyzing lib/Horde/Kolab/Format/XML/distributionlist.php
Analyzing lib/Horde/Kolab/Format/XML/event.php
Analyzing lib/Horde/Kolab/Format/XML/hprefs.php
Analyzing lib/Horde/Kolab/Format/XML/note.php
Analyzing lib/Horde/Kolab/Format/XML/task.php
Analyzing lib/Horde/Kolab/Format/Date.php
Analyzing lib/Horde/Kolab/Format/XML.php
Analyzing lib/Horde/Kolab/Format.php
Package Horde_Kolab_Format-0.9.0.tgz done
Tag the released code with `pear cvstag package.xml'
(or set the CVS tag RELEASE_0_9_0 by hand)

As most Horde PEAR packages have not yet been marked stable, PEAR will still refuse to install the new package:

# ~/pear-test/pear/pear -c ~/pear-test/.pearrc  install Horde_Kolab_Format-0.9.0.tgz 

Failed to download horde/Horde_DOM within preferred state "stable", latest release is version 0.1.0, stability "alpha", use "channel://pear.horde.org/Horde_DOM-0.1.0" to install
Failed to download horde/Horde_NLS within preferred state "stable", latest release is version 0.0.2, stability "alpha", use "channel://pear.horde.org/Horde_NLS-0.0.2" to install
Failed to download horde/Horde_Util within preferred state "stable", latest release is version 0.0.2, stability "alpha", use "channel://pear.horde.org/Horde_Util-0.0.2" to install
Did not download optional dependencies: horde/Horde_Prefs, use --alldeps to download automatically
horde/Horde_Kolab_Format requires package "horde/Horde_DOM" (version >= 0.1.0)
horde/Horde_Kolab_Format requires package "horde/Horde_NLS"
horde/Horde_Kolab_Format requires package "horde/Horde_Util"
horde/Horde_Kolab_Format can optionally use package "horde/Horde_Prefs"
No valid packages found
install failed

The required packages must be installed manually first:

# ~/pear-test/pear/pear -c ~/pear-test/.pearrc  install channel://pear.horde.org/Horde_DOM-0.1.0

downloading Horde_DOM-0.1.0.tgz ...
Starting to download Horde_DOM-0.1.0.tgz (4,256 bytes)
.....done: 4,256 bytes
install ok: channel://pear.horde.org/Horde_DOM-0.1.0

# ~/pear-test/pear/pear -c ~/pear-test/.pearrc  install channel://pear.horde.org/Horde_Util-0.0.2

Did not download optional dependencies: horde/Horde_Browser, use --alldeps to download automatically
horde/Horde_Util can optionally use package "horde/Horde_Browser"
downloading Horde_Util-0.0.2.tgz ...
Starting to download Horde_Util-0.0.2.tgz (16,603 bytes)
......done: 16,603 bytes
install ok: channel://pear.horde.org/Horde_Util-0.0.2

# ~/pear-test/pear/pear -c ~/pear-test/.pearrc  install channel://pear.horde.org/Horde_NLS-0.0.2

downloading Horde_NLS-0.0.2.tgz ...
Starting to download Horde_NLS-0.0.2.tgz (75,779 bytes)
.................done: 75,779 bytes
install ok: channel://pear.horde.org/Horde_NLS-0.0.2

This time the installation should succeed:

# ~/pear-test/pear/pear -c ~/pear-test/.pearrc  install Horde_Kolab_Format-0.9.0.tgz 

Did not download optional dependencies: horde/Horde_Prefs, use --alldeps to download automatically
horde/Horde_Kolab_Format can optionally use package "horde/Horde_Prefs"
install ok: channel://pear.horde.org/Horde_Kolab_Format-0.9.0

All that is missing now is to tell PEAR that it should provide PHP with only one include directory: the one we just set up. Ensure that you replace USER with the name of the current user.

# ~/pear-test/pear/pear -c ~/pear-test/.pearrc config-set php_bin "`~/pear-test/pear/pear -c ~/pear-test/.pearrc config-get php_bin` -d include_path=/home/USER/pear-test/pear/php"

After the installation has succeeded, the unit tests should be run in order to validate the dependencies. If the tests are based on PHPUnit, that tool has to be installed first:

# ~/pear-test/pear/pear -c ~/pear-test/.pearrc channel-discover pear.phpunit.de

Adding Channel "pear.phpunit.de" succeeded
Discovery of channel "pear.phpunit.de" succeeded

# ~/pear-test/pear/pear -c ~/pear-test/.pearrc install phpunit/PHPUnit

And now we can finally run the last check before releasing the PEAR package:

# cd ~/pear-test/pear/tests/Horde_Kolab_Format/Horde/Kolab/Format/
# ~/pear-test/pear/pear -c ~/pear-test/.pearrc run-tests -u 

"Gentoo Linux" as an e-book

The book is now available as an e-book, too.

In a way I'm still unhappy that it is not available as a free PDF. But on the other hand, that would be unfair to the amount of work the publisher invested. It actually helps a technical guy like me get the message across when people knowledgeable about writing help in the process. I'm at least not so focused on code that I couldn't judge the quality difference between what I can write and what I got back from OpenSourcePress.

Still, I retain the hope that it will be as free as the software I write at some point in the future.

Thursday, July 03, 2008

Installing PEAR packages from CVS

A short note illustrating how to install PEAR packages from CVS:

cvs -d :pserver:cvsread@cvs.php.net:/repository login
(PASS: "phpfi")

cvs -d :pserver:cvsread@cvs.php.net:/repository checkout pear/PEAR_Command_Packaging

pear package package.xml

pear install PEAR_Command_Packaging-0.13.tgz

Thursday, June 05, 2008

Diffing between branches using CVS

Since I had to search a little to find the correct command for a sane diff between branches when using CVS, I'd better note it down here:

cvs diff -kk -r BRANCH -r HEAD
The main point was to avoid the keyword substitution here.

Monday, June 02, 2008

layman-1.2.0 has been released

Finally another layman release. The list of open bugs accumulated during the past half year was rather small, so there is not much to say about it.

The most notable change is probably the new default storage location, which has been switched from /usr/portage/local/layman to /usr/local/portage/layman: /usr/portage/local was an older location for overlays, and using /usr/local/portage is advised nowadays.
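If an existing overlay tree has to be relocated by hand, the move could look like this (a sketch of my own, not part of the release notes; the helper name is invented, and the storage setting in /etc/layman/layman.cfg plus any PORTDIR_OVERLAY entries may need adjusting as well):

```shell
# Hypothetical migration helper: move an overlay tree from the old
# default storage location to the new one. On a real system the prefix
# would be empty; the parameter only exists so the logic can be
# exercised safely outside /.
migrate_layman_storage() {
    old="$1/usr/portage/local/layman"
    new="$1/usr/local/portage/layman"
    [ -d "$old" ] || return 0   # nothing to migrate
    [ -e "$new" ] && return 1   # refuse to clobber an existing tree
    mkdir -p "${new%/*}"        # create /usr/local/portage if missing
    mv "$old" "$new"
}
```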

The complete list of resolved bugs:

  • Added use flags for pulling in version control systems as a dependency (#168203)
  • Added umask handling (#186819)
  • Modified storage location and provided empty default make.conf (#219786)

And somebody provided an ebuild for layman bash completion. I'm going to take a look at that one soon.

Sunday, May 18, 2008

Using puppet on Gentoo

Puppet is a tool for managing your system configuration. It provides a complete language for expressing and realizing system settings. After some introductory words this post will focus on a Gentoo specific puppet module for managing package installations.

If you have no clue about puppet you might wish to read the introduction if you are interested in managing the configurations of your system in an efficient way. The discussion about the gentoo specific module will only be of interest to you if you already know the basics of writing puppet modules.

Introduction

What are the advantages of using puppet rather than editing all files in /etc by hand?

  • Using puppet means you create a repository of your configuration knowledge
  • You can replicate all of or part of the settings to another host
  • In addition you can version control and share your knowledge in a repository

Mind you: If you are only managing a single host you might not find much value in the items listed above. Indeed puppet only becomes useful if you really wish to apply a complex configuration over many hosts.

But of course this is true for any groupware server and in particular the Kolab Server. Porting Kolab to Gentoo is a project I have been working on for more than three years now.

The initial version (Kolab2/Gentoo-2.1) failed to make me really happy. One central reason for that has been the configuration tool provided by Kolab. While it works fine for the original version of the Kolab Server it simply fails to cope with the amount of options users have on Gentoo.

I always wanted to merge my own crappy tool for configuration management with the code from the Kolab Server. But a kind anonymous voice replied to the blog post linked in the previous sentence, telling me that this was a stupid idea and that I should use puppet. He was right.

So I'm establishing the Kolab2/Gentoo groupware server configuration based on puppet at the moment. As this includes generating some Gentoo specific modules for puppet it is now time to stop the introductory words and get down to some puppet code.

Installing packages for generic distributions

In order to tell puppet that you wish to have a single package installed you would use a construct like this:

package { openldap:
  ensure   => 'latest',
}

This works fine on most distributions but on Gentoo you might ask about support for use flags, keywords and masking.

Installing packages on Gentoo

My solution is the puppet module os_gentoo.

This module is mainly concerned with the management of the files or directories you find at /etc/portage/package.* on a Gentoo system. In order for puppet to manage these paths it makes sense to convert them into directories.

The module provides four central parts:

  1. Backup of the original contents of /etc/portage/package.* if these were files.
  2. Converting the paths into directories.
  3. Restoring the original file contents as /etc/portage/package.*/package.*.original.
  4. Providing functions to easily manage use flags, keywords and masking for other packages.

Backup of /etc/portage/package.*

If the user managed /etc/portage/package.* as files, we need to grab the content and store it. Puppet provides the file() function for that, but the function will fail if it sees a directory. So we need to determine whether the path is already a directory. We need to write some ruby code at this point and create a new fact:

# Determine if these are regular files
 
package_use = '/etc/portage/package.use'
 
Facter.add('use_isfile') do
  setcode do
    FileTest.file?(package_use)
  end
end

...

Facts are little pieces of system information that puppet determines automatically using the tool dev-ruby/facter. The code given above checks if /etc/portage/package.use is a file and places that information in the variable use_isfile. We will shortly meet that variable again.
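For illustration only, the check the fact performs can be expressed in shell terms as well (this analogue is my own, not part of the module):

```shell
# Shell analogue of the use_isfile fact: true while the portage
# configuration path is still a regular file, false once it has been
# converted into a directory.
portage_path_is_file() {
    if [ -f "$1" ]; then
        echo true
    else
        echo false
    fi
}
```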

This fact is something we store as a plugin at os_gentoo/plugins/facter/portage_dirs.rb within the module.

The code actually performing the backup is packaged in a puppet class:

# Class gentoo::etc::portage::backup
#
# Stores user settings in the /etc/portage/package.* files.
#
# @author Gunnar Wrobel 
# @version 1.0
# @package os_gentoo
#
class gentoo::etc::portage::backup
{
  if $use_isfile {
    $use = file('/etc/portage/package.use')
  } else {
    $use = false
  }
  if $keywords_isfile {
    $keywords = file('/etc/portage/package.keywords')
  } else {
    $keywords = false
  }
  if $mask_isfile {
    $mask = file('/etc/portage/package.mask')
  } else {
    $mask = false
  }
  if $unmask_isfile {
    $unmask = file('/etc/portage/package.unmask')
  } else {
    $unmask = false
  }
}

Here we meet the variables again. In case $use_isfile is true the file contents will be parsed into $use. Otherwise the variable is set to false. We return to our backup two sections further down.

Converting /etc/portage/package.* into directories

Now that we have saved the file contents we can safely convert the files into directories. Puppet would not destroy the original files but would store them in an archive; recovering them from there would be cumbersome for the user, though, so automating the conversion seems to be the better solution.

Requiring a path to be a directory is easy in puppet:

# Class gentoo::etc::portage
#
# Ensure that all /etc/portage/package.* locations are actually
# handled as directories. This allows to easily manage the package
# specific settings for Gentoo.
#
# @author Gunnar Wrobel 
# @version 1.0
# @package os_gentoo
#
class gentoo::etc::portage
{
  # Check that we are able to handle /etc/portage/package.* as
  # directories
 
  file { 'package.use::directory':
    path => '/etc/portage/package.use',
    ensure => 'directory',
    tag => 'buildhost'
  }
 
  file { 'package.keywords::directory':
    path => '/etc/portage/package.keywords',
    ensure => 'directory',
    tag => 'buildhost'
  }
 
  file { 'package.mask::directory':
    path => '/etc/portage/package.mask',
    ensure => 'directory',
    tag => 'buildhost'
  }
 
  file { 'package.unmask::directory':
    path => '/etc/portage/package.unmask',
    ensure => 'directory',
    tag => 'buildhost'
  }
}

Again the four actions have been packaged into a single puppet class. The different actions all have a buildhost tag. This is only required if you really use a build host structure with your servers and plays no role otherwise.

Restoring the original /etc/portage/package.*

Now that puppet has converted /etc/portage/package.* into directories, the original file contents are gone. Another class will rescue them:

# Class gentoo::etc::portage::restore
#
# Restores user settings from the /etc/portage/package.* files.
#
# @author Gunnar Wrobel 
# @version 1.0
# @package os_gentoo
#
class gentoo::etc::portage::restore
{
  if $gentoo::etc::portage::backup::use {
    file { '/etc/portage/package.use/package.use.original':
      content => $gentoo::etc::portage::backup::use,
      tag => 'buildhost'
    }
  }
  if $gentoo::etc::portage::backup::keywords {
    file { '/etc/portage/package.keywords/package.keywords.original':
      content => $gentoo::etc::portage::backup::keywords,
      tag => 'buildhost'
    }
  }
  if $gentoo::etc::portage::backup::mask {
    file { '/etc/portage/package.mask/package.mask.original':
      content => $gentoo::etc::portage::backup::mask,
      tag => 'buildhost'
    }
  }
  if $gentoo::etc::portage::backup::unmask {
    file { '/etc/portage/package.unmask/package.unmask.original':
      content => $gentoo::etc::portage::backup::unmask,
      tag => 'buildhost'
    }
  }
}

For each of the four paths the corresponding backup variable (e.g. $gentoo::etc::portage::backup::use) is checked; note that we need the full class path here to access it. If the variable holds content, it is written to the corresponding new path (e.g. /etc/portage/package.use/package.use.original).

Handling /etc/portage/package.* with puppet

Now the management of /etc/portage/package.* becomes easy as puppet can place new files for every package or set of packages that requires special use flags, keywords or masking.

This is an example for the use flags:

# Function gentoo_use_flags
#
# Specify use flags for a package.
#
# @param context A unique context for the package
# @param package The package atom
# @param use The use flags to apply
#
define gentoo_use_flags ($context = '',
                         $package = '',
                         $use = '')
{
 
  file { "/etc/portage/package.use/${context}":
    content => "$package $use",
    require => File['package.use::directory'],
    tag => 'buildhost'
  }
 
}

The function takes a context, which must be unique as it is used as a path component. In addition the package atom needs to be specified, including the use flags to be set. Puppet will then create a new file within /etc/portage/package.use using the file type (which is something different from the file() function mentioned above).

The only new thing here is the require argument, which specifies that puppet must ensure that the file operation named package.use::directory has been executed before creating this new file. In other words, we ensure that /etc/portage/package.use is indeed a directory.

Managing package installations on Gentoo

Taking all these definitions together we can now express a package installation in the following way:

# Package installation
  case $operatingsystem {
    gentoo:
    {
      gentoo_unmask { openldap:
        context => 'service_openldap',
        package => '=net-nds/openldap-2.4.7',
        tag => 'buildhost'
      }
      gentoo_keywords { openldap:
        context => 'service_openldap',
        package => '=net-nds/openldap-2.4.7',
        keywords => "~$keyword",
        tag => 'buildhost'
      }
      gentoo_use_flags { openldap:
        context => 'service_openldap',
        package => 'net-nds/openldap',
        use => 'berkdb crypt overlays perl ssl syslog -sasl',
        tag => 'buildhost'
      }
      package { openldap:
        category => 'net-nds',
        ensure => 'latest',
        require => [ Gentoo_unmask['openldap'],
                       Gentoo_keywords['openldap'],
                       Gentoo_use_flags['openldap'] ],
        tag => 'buildhost'
      }
    }
    default:
    {
      package { openldap:
        ensure => 'installed',
      }
    }
  }
}

The example installs the experimental net-nds/openldap-2.4.7 package. We differentiate between Gentoo and other distributions using the $operatingsystem variable automatically provided by puppet.

Of course the Gentoo installation looks much more complex than the standard installation on other systems but we have a lot more flexibility on Gentoo. And the idea of the module is to allow us to use this flexibility within puppet.

The first three sections (gentoo_unmask, gentoo_keywords, and gentoo_use_flags) handle the settings in /etc/portage/package.*, and the actual installation happens in the fourth section. We use the standard package type here but require that all the settings in /etc/portage/package.* are in place before puppet runs emerge.

A final note on the variable $keyword used in the section above: this is another fact. It saves us from hard-coding keywords like ~x86 when we actually want ~amd64. It simply reads ACCEPT_KEYWORDS and assumes that the user has the stable keyword selected there. This probably still needs fixing.
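Sketched in shell terms (the function and the make.conf parsing are my own illustration; the real fact is of course a ruby snippet for facter), reading the keyword boils down to:

```shell
# Extract the last ACCEPT_KEYWORDS assignment from a make.conf-style
# file. A fact like $keyword would then derive the testing keyword
# from it (e.g. amd64 becomes ~amd64).
accept_keywords() {
    sed -n 's/^ACCEPT_KEYWORDS="\{0,1\}\([^"]*\)"\{0,1\}$/\1/p' "$1" | tail -n 1
}
```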

Conclusion

It is not too difficult to map the full power of package installation on Gentoo onto the puppet way of installing packages. I'm pretty certain that some of the methods I implemented in os_gentoo are still bound to evolve and do not yet represent the best way of handling installations on Gentoo. The module does not, for example, solve any of the issues mentioned on the Gentoo page in the puppet wiki. So there is still work to be done.

But for now I'm happy to have the central aspects of use flags, keywords and masking available within puppet.

Thursday, May 15, 2008

A first positive experience with ruby: Patching puppet

So far I didn't have much experience with ruby. The few lines of code I've written in that language reminded me too much of perl. And I'm not really a fan of the perl syntax. But today ruby managed to convince me in the area of unit testing.

The problem

I'm bound to stick with ruby, as I decided that the ruby-based puppet will provide a central element of the next Kolab2/Gentoo version. While puppet provides some nice LDAP integration features, these are not quite sufficient for Kolab. Puppet can grab some host parameters from LDAP and integrate them into the host configuration. The problem for Kolab2/Gentoo is that this is limited to certain LDAP parameters: they actually have to be real LDAP attributes that have been defined in a schema.

As I have already argued on the Kolab mailing list, it does not make much sense to define attributes in a schema if you want to use such parameters for configuring a large set of possible applications (postfix, openldap, cyrus, ...). In this case it makes more sense to use the approach also used by the Horde LDAP schema: specifying a single attribute that uses a string value to define parameters with arbitrary names, e.g. ldapAttribute:"one=two" in order to define parameter "one". Only "ldapAttribute" has to be defined in a schema, while the code using this parameter handles converting the string into the final parameter.
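The conversion such code performs is trivial; sketched in shell (helpers of my own, reusing the "one=two" example from above):

```shell
# Split an attribute value of the form "name=value" into its parts,
# as code reading such ldapAttribute entries would have to do.
ldap_param_name()  { printf '%s\n' "${1%%=*}"; }
ldap_param_value() { printf '%s\n' "${1#*=}"; }
```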

I wrote a short patch for puppet to implement this. After a short while I got a positive response but the patch was considered insufficient as it lacked any tests.

A simple solution

I admit I was slightly worried, because learning to handle yet another test framework in a language I had nearly no clue about was something I did not fancy at all. And that was the first really positive surprise about ruby: using the test framework and successfully writing unit tests with it was a matter of half an hour, even though it required mocking the LDAP connection.

The testing allowed me to reconsider my expectations concerning the patch and to fix a problem of my initial version. I submitted the new version shortly afterwards and hope it will find its way into the repository now.

Well done, ruby. Let me see what else you can do in order to convince me that you are indeed a good thing...

Wednesday, May 14, 2008

app-admin/pardalys was created in the Kolab overlay

If you look at the current ebuild you might wonder what the fuss might be about... It is a pretty empty package.

But the p@rdalys project will form the core for Kolab2/Gentoo-2.2. It will certainly replace net-mail/kolabd and might include some other packages, too.

The idea is to allow you to install Kolab2/Gentoo-2.2 with two simple steps:

emerge app-admin/pardalys
pardalys

Of course there is still a certain way to go until it will actually work that way. And this easy setup is actually just meant as a nice side effect and is not the main point of starting the project. I'll start explaining this package in greater detail once I push more code into it.

For now the link to the project page will be all I can provide.

Currently it might not be clear what the package will actually be about, but if people wish to contribute to the project at a later point, they should visit the git repository on GitHub. This git repository serves as a scratch repository for easy sharing and patching of the code. The reference repository, on the other hand, is kept in subversion on SourceForge and is used for packaging.

More on the whole story once there is more code.

Wednesday, April 30, 2008

Gentoo on a 1&1 vServer

Last update: 2008/04/30

Companies like 1&1 and Strato offer virtual servers based on the Virtuozzo virtualization technology. While these machines are quite cheap and provide a full Linux work environment, they run SUSE by default. Not my favorite Linux distribution...

I was pretty certain that I could also switch the server to Gentoo. But when I asked customer support, they told me that they had no one running Gentoo on any of these machines and that they would have no clue whether it could work.

So I tried, and it is definitely possible. Just in case there are others who would like to have a Gentoo vServer on a Virtuozzo system, this HowTo provides some instructions on how to achieve that.

Do I need to give the usual warnings? You'll completely wipe the old system and if something does not work, you will have to reinitialize the server. If you don't want to take that risk, do not continue.

Cleaning up

First you have to log into your "Virtuozzo Power Panel" in order to switch the system into repair mode. The original system then resides in /repair and you work in a safety mode.

Now log into your system via ssh and make a backup copy of the old /etc/mtab (this helps to get a working df command at a later point; reported by Gian):

cp /repair/etc/mtab /root/mtab.old

Now remove the old SUSE system:

cd /repair
rm -rf *

In case this fails, your repair directory might be mounted read-only (reported by Ulrich):

mount -o remount,rw /repair

Install the basic Gentoo system

Now (still in /repair) download a stage tarball and a Portage snapshot from your nearest mirror:

wget ftp://linux.rz.ruhr-uni-bochum.de/gentoo-mirror/experimental/x86/vserver/stage3-i686-20060317.tar.bz2
wget ftp://linux.rz.ruhr-uni-bochum.de/gentoo-mirror/snapshots/portage-latest.tar.bz2
tar xvjpf stage3-*.tar.bz2
tar xvjf portage-*.tar.bz2 -C /repair/usr
rm stage3-*.tar.bz2 portage-*.tar.bz2

The basic tools are now in place. Next we need the original network information:

cp /etc/resolv.conf /repair/etc/

In addition copy the original mtab back into place:

cp /root/mtab.old /repair/etc/mtab

And now we can chroot into the new Gentoo environment:

mount -t proc proc /repair/proc/
mount -o bind /dev /repair/dev
chroot /repair

Time to fix the timezone information and sync the portage tree:

env-update
source /etc/profile
export PS1="(chroot) $PS1"
cp /usr/share/zoneinfo/Europe/Berlin /etc/localtime
emerge --sync

Set a root password:

passwd

Please note that this password becomes your new master password for the server!

Optional: Configure a build host

The vServers are not the most powerful machines and they definitely benefit from pulling packages from a central build host. If you have such a machine you should complete your /etc/make.conf with the following variables:

PORTAGE_BINHOST="http://buildhost.example.com/packages/i686/All"
SYNC="rsync://buildhost.example.com/portage"
EMERGE_DEFAULT_OPTS=" --usepkg --getbinpkg --getbinpkgonly"

Move to baselayout2

The old baselayout-vserver package probably still works, but the newer baselayout2 also supports vServers and I recommend using it.

First we should ensure that we link to the current Gentoo profile:

rm /etc/make.profile
ln -s ../usr/portage/profiles/default-linux/x86/2007.0 /etc/make.profile

Now we unmask the newer baselayout and the OpenRC package:

echo "sys-apps/baselayout ~x86" >> /etc/portage/package.keywords
echo "sys-apps/openrc ~x86" >> /etc/portage/package.keywords

In case the kernel underlying your virtual server is somewhat older, you should also ensure that you do not use a glibc newer than 2.5-r4 and that nptl is disabled:

echo ">sys-libs/glibc-2.5-r4" >> /etc/portage/package.mask
echo "sys-libs/glibc -nptl -nptlonly" >> /etc/portage/package.use

Time to update the system:

emerge -uND world

Configure Gentoo as a virtual server

Now you can configure the network:

emerge iproute2
cd /etc/init.d
rm net.eth0
ln -s net.lo net.venet0
rc-update add net.venet0 default
rc-update add net.lo default

You will need to provide a static definition of your network parameters in /etc/conf.d/net. In order to determine the necessary parameters, follow the steps below:

# ip addr
326: lo: <LOOPBACK,UP> mtu 16436 qdisc noqueue
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 brd 127.255.255.255 scope host lo
327: venet0: <BROADCAST,POINTOPOINT,NOARP,UP> mtu 1500 qdisc noqueue
    link/void
    inet 127.0.0.1/32 scope host venet0
    inet 87.123.45.123/32 scope global venet0:0

From the output, note the IP of the venet0 adapter. Here it is 87.123.45.123.

Now you need the routing information:

# ip route
191.255.255.0/24 dev venet0  scope link
127.0.0.0/8 dev lo  scope link
default via 191.255.255.1 dev venet0

The necessary parameters are the first netmask and the default gateway (191.255.255.0/24 and 191.255.255.1).
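If you want to script this step, the two values can be pulled out of the ip output with a little awk. The sketch below runs against a captured sample of the output shown above (on a live server you would pipe ip addr and ip route in directly); the sample text and variable names are illustrations, not part of the original HowTo:

```shell
#!/bin/sh
# Sample "ip addr" output as captured above; on a live vServer use
# the real command instead of this canned text.
ip_addr_output='326: lo: <LOOPBACK,UP> mtu 16436 qdisc noqueue
    inet 127.0.0.1/8 brd 127.255.255.255 scope host lo
327: venet0: <BROADCAST,POINTOPOINT,NOARP,UP> mtu 1500 qdisc noqueue
    inet 127.0.0.1/32 scope host venet0
    inet 87.123.45.123/32 scope global venet0:0'

ip_route_output='191.255.255.0/24 dev venet0  scope link
127.0.0.0/8 dev lo  scope link
default via 191.255.255.1 dev venet0'

# The public address is the "scope global" entry (venet0:0);
# split "87.123.45.123/32" on "/" and keep the address part.
IP=$(printf '%s\n' "$ip_addr_output" |
    awk '/scope global/ { split($2, a, "/"); print a[1] }')

# The gateway is the target of the default route.
GATEWAY=$(printf '%s\n' "$ip_route_output" |
    awk '/^default/ { print $3 }')

echo "IP=$IP GATEWAY=$GATEWAY"
```

With the sample data this prints IP=87.123.45.123 GATEWAY=191.255.255.1, matching the values used in the configuration below.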

Adapt the following settings to your specific parameters and echo them into your network configuration file:

echo '
modules="iproute2"
modules="!ifconfig"

config_venet0="87.123.45.123 netmask 255.255.255.0 broadcast 0.0.0.0"

routes_venet0="191.255.255.0/24 scope link
               default via 191.255.255.1"

' >> /etc/conf.d/net

I am not an expert on the network settings and the proper routing on a vserver but these settings did work for me. Please send me a mail if you have suggestions on how to improve the configuration.

Another comment by Ulrich:

I put spaces between config_venet0, routes_venet0 and the equals sign. That is not allowed. Adding this as a "don't" to your explanation might save an hour or two for some guys out there.
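The reason behind Ulrich's observation is plain shell syntax: /etc/conf.d/net is sourced as a shell script, and a shell variable assignment must not have whitespace around the equals sign. A minimal demonstration (the value is just an example):

```shell
#!/bin/sh
# Correct: no whitespace around "=".
config_venet0="87.123.45.123 netmask 255.255.255.0"
echo "ok: $config_venet0"

# Wrong: with spaces the shell parses "config_venet0" as a command
# name with "=" and the address as arguments, which fails with
# "command not found" instead of setting the variable.
if ! config_venet0 = "87.123.45.123" 2>/dev/null; then
    echo "broken: 'config_venet0 =' is a command, not an assignment"
fi
```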

Finally you need to add the ssh server to the default services so that you will be able to log into the system:

rc-update add sshd default

Reboot into Gentoo

Now you should be able to end the repair mode. Log into your Virtuozzo Power Panel, select "Finish repair" and try to log into your vserver via ssh a short while later.

ChangeLog

  • 2008/04/30: Included moving to baselayout2

Tuesday, April 29, 2008

Moving to baselayout2

I finally took the time to move my configuration to baselayout2 and openrc. It was about time since I was still using the old baselayout-vserver packages on my vservers. I admit I was afraid the move would hurt so I waited for a while.

But it was really, really smooth.

I made only one mistake and did not notice that my link to net.lo vanished in the upgrade process. So I was subsequently wondering why ping responded with "connect: invalid argument" when pinging my own machine. Easy enough to fix.

Excellent work from the baselayout and OpenRC devs. Nice.

Friday, April 25, 2008

The OpenSourceSchool opens its doors

My publisher started with his next endeavor in bringing knowledge to the masses: The OpenSourceSchool. This time it is about spoken words - or courses - rather than written pages bound as books. Many OpenSourcePress authors are offering seminars there.

I would definitely have liked to offer a course about Gentoo there. But I had to agree with them that this would probably not raise enough interest from paying customers. Or am I wrong about that?

But of course there was room for the second topic dear to my heart: Kolab. The course will take five days and touch all major topics of the Kolab server. Central services such as Postfix, OpenLDAP, and Cyrus IMAP will form the core, but I'll certainly also include a chapter about getting the Horde web client successfully installed. So we will hopefully have a new batch of Kolab experts in October.

And hopefully the preparations for the course will also help in laying the groundwork for a book about Kolab. This is the only book I still want to write after going through the pain of writing the Gentoo book.

Thursday, April 24, 2008

Another round of Horde bugs...

I'm back to Horde bug fixes, and while their CVS server vanished into some kind of limbo, I took the time to create a Horde/Kolab project page. Maybe it is a useful overview for the people interested in Horde. I definitely have to update the Kolab wiki, too. But that might still take a while.

Wednesday, March 12, 2008

Sync my Kolab

SyncML support for the Kolab server has been requested for several years now. Supporting it via the modules available within Horde always seemed to be one of the easiest ways to get mobile clients to synchronize with the server. Since the newest Kolab server release candidates now provide Horde, how far away is SyncML support?

Not far at all... Univention contracted p@rdus via the Kolab Konsortium to implement SyncML support within the Kolab server.

Initially a version requiring an additional MySQL database was planned, but p@rdus invested some additional time in writing purely IMAP-based drivers, so SyncML support will also be available in the next Kolab release (by default the Kolab server does not use MySQL at all).

Today I was able to sync the Blackberry provided by the customer for the first time. Contacts, events, tasks all survived my minimal testing. Of course the same procedure failed once I gave the customer access to the test server...

So right now I'm entering the debugging phase and I'm starting to prepare some scripts so that people eager to try the SyncML support can install an experimental Horde version on an external web server.

Update:

A script for installing horde from CVS is now available. It also installs all the required Kolab patches for SyncML support.

You can fetch and run it like this:

wget http://kolab.org/cgi-bin/viewcvs-kolab.cgi/*checkout*/server/horde/external-horde-cvs.sh
chmod u+x external-horde-cvs.sh
./external-horde-cvs.sh

Tuesday, March 04, 2008

Flow control in screen

A reminder for myself: this is the second time that my Emacs session within screen suddenly stopped responding to Ctrl+s. The key combination was somehow piped through to bash directly and thus sent a STOP signal. Unlocking the terminal with Ctrl+q was easy, but the new Ctrl+s meaning effectively killed my save-buffer command in Emacs. While I noticed that I had hit some incorrect key combination, I was not exactly certain which one. This happened before, and last time I didn't have the time to track down the origin of the problem. This time I found the required hint: I must have hit Ctrl+a f accidentally, thus activating flow control.
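If you never want flow control in the first place, it can apparently be switched off for all new windows in ~/.screenrc; I have not verified this on every screen version, so treat it as a hint rather than gospel:

```
# ~/.screenrc: start all windows with flow control disabled,
# so Ctrl+s / Ctrl+q are passed straight through to applications
defflow off
```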

Friday, February 29, 2008

I really don't want to force you...

Skin the onion

"... no pressure whatsoever ...". Isn't that how it always starts?

Initially I intended to head over to the Chemnitzer Linuxtage for a chat here and there and maybe a beer or two. Just for fun.

But when I realized that there are still some possibilities for giving a short overview on a Gentoo-specific topic, I told Tobias (dertobi123) that I could give a talk on the Kolab2/Gentoo project. That is definitely still within the fun area of things.

What I did not expect was that I would get called with the request to jump in for one of the speakers in one of the main tracks who got sick ... no pressure whatsoever ... sure. Today is going to be hectic...

Thursday, February 28, 2008

Oops... time for recovery

Hrm... no clue how I managed to break my Gallery installation. I probably played around too much with gallery2flickr. Or maybe I killed it when bumping and testing the ebuild last time.

Anyhow, it must have been a while ago, so I actually had to ask dar to go into recovery mode. And since this happens very infrequently, I always forget how to select a sub-path of the whole archive dar creates.

Here is my reminder:

dar --crypto-block 20480 --key MYKEY -x 01-18-2008_0358 -g localhost/htdocs/gallery

Hello again, my dear Gallery... Backups are indeed a nice thing.

Wednesday, February 27, 2008

Multiply your knowledge

Gentoo Linux

I've waited since Friday and it finally arrived today. Another German Gentoo book is available now!

It concentrates on the experienced Linux user and tries to achieve two things:

  • Get you running on Gentoo if you never used it before
  • Highlight the central Gentoo tools and provide a reference for their usage

For new users it should be possible to grab a laptop, insert the DVD and run through large parts of the book without ever connecting to the net. Thus the first steps with Gentoo are hopefully pleasant.

Even the early chapters feature larger sections that provide in-depth knowledge for the more experienced Gentoo users. All tools, options, variables and concepts are referenced in the thirty index pages of the book. So it should be a good companion while working with Gentoo.

If you need more details you can check the table of contents or even read the chapter on writing ebuilds.

One of the later chapters - "Extending Gentoo" - can be considered the main origin of the whole story. It talks about overlays.gentoo.org and layman.

About two years ago I sat down on a weekend to code the basis for layman in order to make the use of overlays.gentoo.org as easy as possible. I certainly didn't expect this to have any significant effect. After all layman always was - and still is - a rather trivial script.

But since I liked the whole idea behind overlays I decided to write a small article about the concept and how layman fits into it. This got published in the German Linux Magazin and had one unexpected result: I got an email from OpenSourcePress asking whether I'd like to write a whole book about Gentoo. At that time the answer was "yes".

Would it be "no" today? I'm not certain. I have to say that I hated the three weeks of pure text editing in the final phase of the project. It reminded me far too much of my PhD thesis. Yes, I like writing stuff: emails, wiki pages, blog entries, source documentation, ... you name it. Small stuff. Epic texts turn out to be much harder.

What definitely made it bearable in this case were the systems provided by OpenSourcePress: they give you a Subversion repository and the whole text is written in LaTeX. They also work with Emacs on their end, which happens to be my favorite editor too.

And one thing about their Subversion repository is really great: you commit crudely written techno-babble on your side and a few revisions later it comes back in readable German. This is what I definitely liked most about the whole project: getting rewritten into readable language. Big kudos to the team of editors, which really does a great job.

So would I do it again? Well, I don't have to decide on that anymore. The basis is there so all there will be are further revisions. And that will hopefully be easier than the first version.

But for now I'll keep the book closed and continue coding...

Horde: Synching with HEAD

Feels good to be finally back working on Horde. Last week saw the generation of Kolab patches for Horde-3.2-RC2 and this week I finally synchronized my work environment with Horde CVS. The first Kolab commit to Horde CVS went in yesterday. It felt like the last one was ages ago.

Anyhow there are more commits ahead. Now that most parts of Horde work with Kolab the time for the second round of coding is approaching fast: restructuring and optimization. There is still a lot that needs to be done and I'm desperately waiting for Horde 4 to finally restructure the whole Kolab module and get it on a hopefully sane path for the future.

Thursday, February 14, 2008

Python egg fun

Now that I'm back working on some of my Python packages, it was time to look at Python package management again. So far I just used the bundled distutils package, but apparently "setuptools" is the tool that is starting to see widespread use. So I looked at it in more detail and will summarize some notes here.

"setuptools" does not actually deliver that much fancy new functionality. The main benefit lies in the area of dependencies and the creation of distributable packages.

I basically used something like this as setup.py when I only used distutils:

from distutils.core import setup

PACKAGES = ['libpardus',
            'libpardus.configs',
            'libpardus.utils',
            'libpardus.web']

import sys
sys.path.insert(0, './')
from libpardus.version import VERSION

setup(name          = 'libpardus',
      version       = VERSION,
      description   = 'p@rdus Python Library',
      author        = 'Gunnar Wrobel',
      author_email  = 'p@rdus.de',
      url           = 'http://libpardus.sourceforge.net',
      packages      = PACKAGES,
      license       = 'GPL',
      )

Now, with setuptools things do not become much more complicated:

try:
    from setuptools import setup, find_packages
except ImportError:
    from distutils.core import setup
    extra = {}
    PACKAGES = ['libpardus',
                'libpardus.configs',
                'libpardus.utils',
                'libpardus.web']
else:
    extra = dict(
        install_requires = [
            'setuptools',
            'web.py',
            'zope.interface',
            ],
        extras_require = {
            'WEB': ['web.py'],
            }
        )
    PACKAGES = find_packages()

import sys
sys.path.insert(0, './')
from libpardus.version import VERSION

setup(name          = 'libpardus',
      version       = VERSION,
      description   = 'p@rdus Python Library',
      author        = 'Gunnar Wrobel',
      author_email  = 'p@rdus.de',
      url           = 'http://libpardus.sourceforge.net',
      packages      = PACKAGES,
      license       = 'GPL',
      **extra
      )

I'm mainly using find_packages() to generate the package list, and I add some packages as basic requirements. In order to make this also work on systems without setuptools, the whole thing is embedded in a try: ... except: ... statement.

The whole thing is not yet complete since I don't know every detail of "setuptools" yet, but the main point here is that things are not too different if you decide to upgrade from "distutils".

I also created a setup.cfg-file to make the creation of snapshot and release packages easier:

[egg_info]
tag_build = .dev
tag_svn_revision = 1                           

[aliases]
release  = egg_info -RDb ''
relpatch = egg_info -db ''  

Running python setup.py bdist_egg will now create an egg that I can use for testing purposes. It gets the current subversion revision attached to the file name (e.g. libpardus-0.9.2.dev_r38-py2.4.egg). Because of the alias definition

release  = egg_info -RDb ''

the command python setup.py release sdist will create a release source package (e.g. libpardus-0.9.2.tar.gz). The relpatch command is for releasing a patch level package after the main release.

But during development it is really nice to produce these testing eggs without the need to build/install the tools every time.

"setuptools" also makes it easy to handle foreign eggs. This way you can build whole test applications without modifying your site-wide python library. This can be done using the easy_install tool:

easy_install -zmaxd lib/ web.py

This way you would place web.py as a packaged egg within the lib directory.

Within a python library you can then either directly include the full egg filename in sys.path:

sys.path.insert(0, 'lib/web.py-0.23-py2.4.egg')

Using setuptools you can also write this:

from pkg_resources import require
require("web.py")

The details can be found on the setuptools homepage.

I'll probably continue this once I learn how to upload python packages to PyPI, the python package index.

Friday, February 08, 2008

SVN backup using SVN::Mirror

I have a number of Subversion repositories that I like to back up once every hour. The tool I'm using for that is the Perl package SVN::Mirror. I haven't documented the usage of this tool for myself yet, and I feel it is time to correct that.

The tool itself is available on CPAN and on Gentoo you can install it via Portage:

gentoo # emerge dev-perl/SVN-Mirror

This delivers the tool /usr/bin/svm that holds all the functionality to replicate Subversion repositories.

I always had strange locking issues with this Perl script, so I added a small section for the occasional unlocking:

--- /usr/bin/svm        2007-06-21 15:17:55.000000000 +0200
+++ /usr/bin/svm-expanded       2008-01-07 14:59:54.000000000 +0100
@@ -82,6 +82,17 @@
        m/connection timed out/;
 }
 
+sub unlock {
+    my $path = shift;
+    my $what = shift;
+    my $pool = SVN::Pool->new_default;
+    my $m = SVN::Mirror->new(target_path => $path, target => $repospath,
+                            pool => $pool, auth => $auth,
+                            get_source => 1);
+
+    $m->unlock($what);
+
+}
 
 sub sync {
     my $path = shift;

To backup a repository you will first have to initialize a new repository using svm:

gentoo # export SVMREPOS=/var/svn/backup/libpardus
gentoo # svm init / https://libpardus.svn.sourceforge.net/svnroot/libpardus

Updating the repository to the newest state is not much more complex:

gentoo # export SVMREPOS=/var/svn/backup/libpardus
gentoo # svm sync /

If you use the unlocking patch provided above this makes:

gentoo # export SVMREPOS=/var/svn/backup/libpardus
gentoo # svm unlock / force
gentoo # svm sync /

I package these few lines into a small shell script and add it to cron so that I get a regular update of all the repositories. Just in case SourceForge ever dies (which I hope it does not).
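Such a wrapper might look like the sketch below. The repository name, backup path and URL are examples rather than the real setup, and the svm invocation is parameterized via $SVM; it defaults to a dry run that only prints the commands, so set SVM=svm to actually mirror:

```shell
#!/bin/sh
# Hedged sketch of an hourly svm backup wrapper. Names, paths and
# URLs are examples. SVM defaults to "echo svm" (dry run); set
# SVM=svm in the environment to perform a real backup.
SVM=${SVM:-"echo svm"}
BACKUP_ROOT=${BACKUP_ROOT:-/var/svn/backup}

backup_repo() {
    name=$1; url=$2
    # svm reads the target repository from $SVMREPOS.
    SVMREPOS="$BACKUP_ROOT/$name"
    export SVMREPOS

    # Initialize the mirror on the first run only.
    if [ ! -d "$SVMREPOS" ]; then
        $SVM init / "$url"
    fi

    # Clear stale locks (requires the unlock patch), then sync.
    $SVM unlock / force
    $SVM sync /
}

backup_repo libpardus \
    https://libpardus.svn.sourceforge.net/svnroot/libpardus
```

Adding a line like `0 * * * * /usr/local/bin/svm-backup.sh` to the crontab then keeps the mirrors fresh every hour.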

libpardus: Getting another python project back on track

libpardus is another one of my python projects that I hope to get revived soon. It provides a collection of different utilities that I require for the other python tools I wrote. Right now I'm just busy getting the project basics updated and this is a first blog post for the tool.