Tuesday, December 13, 2011

The Horde release train

The Horde project has pushed out 906 releases in the eight months since the initial Horde 4 release. PEAR packages, better release management tools, and continuous integration seem to pay off: the stream of bug fixes and improvements available to users has switched to high speed.

At the same time, the use of PEAR packages and automatic DB migrations has reduced the update effort on the administrator side to an absolute minimum.

pear upgrade -c horde

And a few clicks later you are up to date.
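On a machine where the Horde channel has not been registered yet, the whole routine looks like this (the channel discovery step only needs to happen once; the channel is the project's pear.horde.org server mentioned below):

```shell
# One-time step: make the Horde PEAR channel known to the local installer.
pear channel-discover pear.horde.org

# From then on, a single command pulls in all pending Horde updates.
pear upgrade -c horde
```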

The code quality of the basic PEAR tools is an entirely different matter... BUT it is just soo damn cool anyway ;)

Sunday, November 13, 2011

Accessing PEAR server information with PHP

The number of PEAR servers has increased dramatically in recent times, perhaps due to the availability of Pirum, a simple PEAR server. Whatever the cause: there are plenty of package lists and package datasets available on the net these days. Accessing this data is not always easy, though.

PEAR servers provide a well-defined REST API that is usually queried using the PEAR toolset available via the basic PEAR package. That package, however, does not provide anything that I would consider a decent developer API.

So let me provide you with an alternative that is available via the Horde framework libraries: The Horde_Pear library.

The package comes with the Horde_Pear_Remote class which provides you with high-level access to the REST interface of a PEAR server.

Creating an instance of this class without providing any arguments to the constructor will allow you to access the PEAR server at pear.horde.org.

$pear = new Horde_Pear_Remote();
print(join("\n", $pear->listPackages()));


Alternative servers can be specified in the first argument to the constructor:

$pear = new Horde_Pear_Remote('pear.phpunit.de');
print(join("\n", $pear->listPackages()));


The full set of the functionality provided by Horde_Pear_Remote is detailed on our website.

Friday, November 04, 2011

The library section of the Horde website supports component documentation now.

Our new libraries section was started a while ago to further promote the PHP components we offer. Now this section supports publishing component documentation as well. You can take a look at the documentation of the Cli_Modular package, for example. In fact, it is pretty much the only package that fully uses the new system so far, so there is still work ahead.

If you use any of the components, we would be grateful if you started writing a bit of documentation or described some examples of how to use the package successfully. It will help us and others tremendously. Thanks!

Let me try to explain how the system is currently intended to work - feedback and critical comments of course welcome:

  1. When creating new documentation for one of the Horde components you start a new section on the developer documentation of our wiki. See the Horde_Cli_Modular link below Library components for example. Please note: It is not mandatory to write the documentation in the wiki. You can have static documentation files within a component as well. This is just meant as an option for those situations where it makes sense to work on documentation files together with people that do not have direct git commit access. Hopefully there will be many component consumers interested in updating and fixing the documentation.
  2. How you structure that new section is up to you. For a simple package you might wish to keep all documentation in a single file, but for more complex ones it might make sense to have several files. In that case the new section page should probably just be a link list to the various wiki pages that document the component. In the case of Horde_Cli_Modular I used a single page.
  3. Once the documentation has been written in the wiki format it is time to download it into the doc directory of the component. The horde-components helper is what you should use for that operation. The tool expects to find a DOCS_ORIGIN file within the doc or docs (for the applications) directory of the component. This file must conform to the reStructuredText format and map remote URLs to local file paths relative to the component root. The DOCS_ORIGIN from Cli_Modular links the wiki page exported as reStructuredText to the path doc/Horde/Cli/Modular/README. The links can of course point anywhere so you are in principle free to use any remote source. But you should ensure the page provides readable reStructuredText.
  4. Now you can run horde-components fetchdocs and it should fetch the URLs into the respective local files. You can then run horde-components update to refresh the file list in the package.xml file if the documentation files were not included before.
  5. Once a component has been released we can run something like this: horde-components Horde_Cli_Modular webdocs -D ~/git/horde-web/ --html-generator=git/horde-support/maintainer-tools/docutils/html.py --allow-remote in order to update the documentation on the website. Right now this does not happen automatically during a release, but I plan to add this soon.
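To illustrate step 3, the mapping inside DOCS_ORIGIN could look roughly like the following. This is a sketch assuming plain reStructuredText hyperlink targets, with the local path as the link name and the remote export URL as the target; both the URL and the exact syntax are illustrative, so consult the real DOCS_ORIGIN file in Cli_Modular for the authoritative format.

```rst
.. DOCS_ORIGIN: maps remote documentation sources to local files.
   Paths are relative to the component root.

.. _`doc/Horde/Cli/Modular/README`: http://wiki.horde.org/Doc/Dev/Component/Horde_Cli_Modular?format=rst
```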

There are of course still a few problems with this approach:

  • The reStructuredText exporter of our wiki engine "Wicked" is not yet complete. You may experience problems with the export if you use constructs it does not know yet. Please ping me if you experience problems with the export.
  • The wiki syntax and the syntax required by reStructuredText are not 100% aligned. You can write a wiki page that looks just fine but leads to errors or formatting problems in the reStructuredText export. After creating or updating a wiki documentation page it makes sense to run the horde-components fetchdocs/horde-components webdocs sequence to check for problems.
  • We still have some components with documentation files not in reStructuredText format. These will result in broken links on our website for now, but I plan to fix the few faulty packages soon.

Jenkins on ci.horde.org got a major update

The Horde continuous integration setup saw a major upgrade and a number of fixes during the last week.

The take-home message for everyone who does not wish to read the full log: ci.horde.org will now run horde/framework/bin/test_framework after a new commit has been pushed. This reduces the load on the system significantly and you will get results faster. If you develop Horde code you should run horde/framework/bin/test_framework as well in order to check whether your recent commits could cause failures on our continuous integration server.

The full recap of the recent changes:

  • Update to the newest version of the Horde components tool. This makes it possible to use the improved templating system.
  • New configuration and build templates that reduce the job build time by avoiding rebuilding the set of package dependencies when this is not necessary. This has an important implication: a code change in one package (e.g. Horde_Imap_Client) will be tested against unchanged dependencies (e.g. Horde_Mime), even if the commit also touched one or several of those dependencies. The packages should be backward compatible, so this should not result in errors. The dependencies of a package will only get updated in case the dependency list in the package.xml of that package changes.
  • Running horde/framework/bin/test_framework has been integrated into the horde-git job. With the new "rebuild dependencies only when necessary" policy (see above) the component jobs do not fully test the integrity of the latest commit anymore. When the dependencies do not get updated the focus shifts a bit more towards checking for backward compatibility. In order not to lose the integrity check for the "bleeding edge", our test_framework script is now executed right after updating the code from git.
  • Running test_framework also allows to run component jobs only if the code of the component has actually been touched. This significantly reduces the load on ci.horde.org as it removes the need for rebuilding all components for each and every commit.
  • An update to the newer CodeSniffer, which included an update to the checks for the Horde coding style. The new ruleset is available from our horde-support repository.
  • A customizable ruleset for the PHP mess detector has been added as well. We still need to tweak the exact configuration of PMD to match it with what we consider reasonable defaults for Horde code.
  • DocBlox has been added to allow generating experimental API documentation. DocBlox is significantly faster and less memory hungry than PHPDocumentor. It has already been adopted by large frameworks such as Zend. So I figured it is worth taking a look at it. Feedback welcome!
  • The Jenkins configuration has been updated with the latest fixes and improvements from the Jenkins-PHP project.
  • The setup procedure has been fixed so that it should be possible to generate a local setup - comparable to ci.horde.org - again.

Tuesday, October 18, 2011

demo.horde.org got updated

You might have noticed that the Horde homepage received a new Demo button within the main navigation recently. We did indeed manage to get a demo machine up and running at demo.horde.org. The installation got updated to the latest releases just yesterday and Ansel - the Horde photo management application - got added into the mix. Feel free to test drive this new application as well as the other components on demo.horde.org.

If you experience any problems please let us know.

In principle you shouldn't be able to break anything on the installation. It is a virtual EC2 machine and we will update the image every once in a while when new releases become available. Any data you store on the machine will be reset at that point.

You should also be able to mail the two demo users demo@demo.horde.org and guest@demo.horde.org. Mailing to the outside is not possible - for obvious reasons.

Friday, October 14, 2011

Horde_Push now supports sending to Blogger.com

Horde_Push supports sending to blogger.com now.

The Horde_Push library is a tool to send content elements to various external services, such as Twitter, Facebook, Blogger, and others. Right now it only allows sending tweets, publishing entries on blogger.com, and sending e-mails.

The idea is to allow publishing content you curate on a Horde installation to the social networks out there. At the moment the code is not quite there yet, as the integration into the base horde package is still lacking. Right now there is only a small command line helper that I use for testing.

Tuesday, June 21, 2011

Anatomy of a Horde test suite - III

Ready for the next item on the test suite agenda? This time the topic is Autoloading. We use a rather simple autoloading setup for most component test suites. It requires no additional setup and works out of the box if you run php AllTests.php.

That is already nice and allows running the complete test suite without further ado. But I must admit that I want more. My default work mode looks like this:

  • open test case (and modify it)
  • hit <f3> <f8> to run phpunit on this single test case
  • hit <f4> <f4> to jump to the code line that produced the first error

This allows me to add a new test definition and immediately run it so that I can check for problems. And I don't need to execute the full AllTests.php to get feedback on the new test. So I'm annoyed every time I hit a test case that does not allow me to do that. A working autoloading setup is the key for that.

Luckily not only my own preferences make using an Autoload.php file in a test suite attractive. There are a number of reasons why such a file can be useful. The Wiki page for the Horde_Test component details them and this is a copy of the relevant section:

The Autoload.php file is not required in a test suite but it is strongly recommended that you use it. Its purpose is to set up PHP autoloading so that all tests in the test suite automatically have access to all the classes required for executing the tests. The reason why it is not mandatory is that Horde_Test_AllTests already loads a basic autoloading definition that works for most framework components.

This means that running php AllTests.php usually does not hit any autoloading problems. Running a single test case (e.g. phpunit Horde/Xyz/Unit/UnitTest.php) is a different matter though.

The *Test.php files do not extend Horde_Test_AllTests and thus there is nothing that would magically set up autoloading if you try to run such a test case in isolation. And running single test cases can be quite convenient if the whole test suite takes a long time to execute. Using an Autoload.php file alongside the AllTests.php file is the recommended way to provide a single test case with autoloading and thus enable commands such as phpunit Horde/Xyz/Unit/UnitTest.php. In addition the file is helpful for any case where you need slightly more complex loading patterns or want to pull in special files manually.

Once you have created an Autoload.php file for your test suite it will also be honored by Horde/Test/AllTests.php. The latter will skip the basic autoloading setup if it detects the presence of an Autoload.php file for the current test suite. That file will be loaded instead and is assumed to contain the required autoloading setup.

The content of Autoload.php

You should at least require the Autoload.php from Horde_Test in this file. This is also what Horde_Test_AllTests would do when choosing the simple autoloading setup.

require_once 'Horde/Test/Autoload.php';

It also makes sense to adapt the error reporting level to the same standards as required in the AllTests.php wrapper:

error_reporting(E_ALL | E_STRICT);

If you derive your test cases from a central test case definition you should load this one in Autoload.php as well:

/** Load the basic test definition */
require_once dirname(__FILE__) . '/TestCase.php';

Sometimes it makes sense to pull in the definition of test helpers that may be used throughout the test suite. They are usually not available via autoloading and need to be pulled in explicitly:

/** Load stub definitions */
require_once dirname(__FILE__) . '/Stub/ListQuery.php';
require_once dirname(__FILE__) . '/Stub/DataQuery.php';

Real world examples for Autoload.php helpers can be found in the Horde_Date and the Kolab_Storage components.
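Putting the pieces above together, a complete Autoload.php for a hypothetical Horde_Xyz test suite might look like this (TestCase.php and the Stub/* files are the examples from above and only exist if your suite actually needs them):

```php
<?php
/**
 * Autoload.php for a hypothetical Horde_Xyz test suite, combining the
 * pieces discussed above.
 */

/** Set up the basic Horde test autoloading. */
require_once 'Horde/Test/Autoload.php';

/** Use the same error reporting level as AllTests.php. */
error_reporting(E_ALL | E_STRICT);

/** Load the shared base test case. */
require_once dirname(__FILE__) . '/TestCase.php';

/** Load stub definitions that are not reachable via autoloading. */
require_once dirname(__FILE__) . '/Stub/ListQuery.php';
require_once dirname(__FILE__) . '/Stub/DataQuery.php';
```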

Within the test cases you only need to load the Autoload.php file which usually looks like this (and obviously depends on the position of the test case in the directory hierarchy of the test suite):

require_once dirname(__FILE__) . '/../Autoload.php';

You'll find additional background information on autoloading within test suite runs on the Wiki page for the Horde_Test component.

Monday, June 20, 2011

Horde4 debian packages - Second round

Nearly half a year has passed since I started my first attempt at Debian packages for Horde4. Back then it was just about snapshots which I generated via a continuous integration setup. But Horde4 has been released now, there are packages, and it is time for the real thing.

I got a first set of packages installable today, but this is just an initial draft. The main point is that the majority of the process is automated.

If you want more details and the installation steps, I suggest following my discussion with Mathieu Parent on pkg-php-pear@lists.alioth.debian.org. These are the mails exchanged so far (with the last two detailing my current status):

  1. On PEAR packaging (25.5., Mathieu)
  2. On PEAR packaging (7.6., Gunnar)
  3. On PEAR packaging (7.7., Mathieu)
  4. On PEAR packaging (17.7., Gunnar)
  5. On PEAR packaging (20.7., Gunnar)

There is not much more to say right now. Any testing and feedback is welcome and there is obviously still a lot of work ahead until this pops up in your default package channels. But it is at least on the horizon and the variant that I ship on files.pardus.de should become usable very soon.

Tuesday, June 14, 2011

Anatomy of a Horde test suite - II

This morning I completed the next step on the journey through Horde's test suites and added the description of the AllTests.php file to the wiki page. I am not going to copy the complete text here but instead focus on the use cases for this file, as I still have a few questions for the audience below.

AllTests.php is the only mandatory requirement for a Horde test suite. Everything else is optional but there has to be an AllTests.php file which serves as an entry point into the test suite.

This is the functionality expected from the file:

  1. It must collect all tests of the test suite.
  2. It must allow retrieving all tests of the suite via Horde_Xyz_AllTests::suite().
  3. It must allow running the test suite via phpunit AllTests.php.
  4. It must allow running the test suite via php AllTests.php.

The Horde_Test package already delivers a boilerplate AllTests.php class in framework/Test/lib/Horde/Test/AllTests.php and deriving an AllTests.php for a standard test suite becomes rather simple. The full code for this is presented on the wiki page and you can also look at an example from our repository.
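For reference, the derived AllTests.php for a hypothetical Horde_Xyz component is sketched below. It follows the boilerplate pattern described above, but treat it as an approximation: the wiki page and the linked repository example are the authoritative versions.

```php
<?php
/**
 * Sketch of AllTests.php for a hypothetical Horde_Xyz component,
 * derived from the Horde_Test boilerplate.
 */
if (!defined('PHPUnit_MAIN_METHOD')) {
    define('PHPUnit_MAIN_METHOD', 'Horde_Xyz_AllTests::main');
}

require_once 'Horde/Test/AllTests.php';

/**
 * Collects all tests of the suite (requirement 1) and exposes them
 * via Horde_Xyz_AllTests::suite() (requirement 2).
 */
class Horde_Xyz_AllTests extends Horde_Test_AllTests
{
}

Horde_Xyz_AllTests::init('Horde_Xyz', __FILE__);

/* Support "php AllTests.php" (requirement 4). */
if (PHPUnit_MAIN_METHOD == 'Horde_Xyz_AllTests::main') {
    Horde_Xyz_AllTests::main();
}
```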

Now I wonder if the items listed above are in fact all the requirements we have for this file.

Requirements (1) and (2) are obvious as this is functionality needed for our horde/framework/bin/test_framework helper that runs all framework tests. Though I assume nobody uses this one on a regular basis at the moment.

But I noticed that (3) does not work out of the box with the current PHPUnit. This led to a pull request as it definitely should (and can) work.

I usually run the tests with a rather long command line that ultimately boils down to phpunit Horde_Xyz_AllTests AllTests.php which is tied to a shortcut in Emacs. As the Lisp code I use for that extracts the class name automatically I never noticed that a plain phpunit AllTests.php does not work.

So are most people using php AllTests.php? How do you run the test suites or would like to run them? Can I get some feedback on this (either here, on IRC or via tweet)?

Anything additional I missed about the requirements for the AllTests.php file?

Next in the series will be on autoloading which should allow me to also look at the problems we still have with that in the application components.

Thursday, June 09, 2011

The Horde newsletter again

Time flies and it is soon time for the next issue of the Horde newsletter. Which reminds me to post the link to the last one in case you missed it. You can subscribe to the monthly newsletter here.

As Jan already mentioned in his blog he gave Radio Tux an interview on Horde 4. By roughly translating half of it to English I checked that I should be able to include a transcript of it in the next newsletter. It delivers a very good overview of all things Horde 4.

Anatomy of a Horde test suite - I

Just got issue 07/2011 of the German Linux Magazine in the mailbox and on the final page there is this little abstract about 08/2011 saying...

"PHP Unit and Jenkins - There are two things guarding against programming errors: unit tests covering your code and continuous integration systems that automate the testing. The next issue will demonstrate this based on a real example from a PHP web project." [translated from German].

The "PHP web project" is actually named "Horde" and, hm... I guess this means I have to write this thing ;) When I agreed to the article I immediately knew I wanted to combine it with an overview of how the Horde test suites are arranged. So far we have been lacking a summary in that area and it should help newcomers to the Horde project to get into testing mode as well.

My mind is currently fully tuned to unit testing and code quality and it is amazing how easy it is to write about this. The initial draft for the article already exceeded all limits when it comes to size. Though I got pretty positive feedback on it I will have to leave some stuff out. Those sections should make it to this blog instead so that I can link to them from the article.

Basically I will make this into a short series of blog entries on unit testing in Horde. I will include parts of the Horde_Test overview, personal musings, and stuff related to the article. Let's hope it is useful to some people out there.

Here we go with the introduction to the Horde_Test overview...


The Horde Project has always had high standards when it comes to code quality. Of course these standards evolved with time and also with the progress the PHP community made. The code from IMP-1.0.0 (1998) didn't come with unit tests. And somehow it lacked classes. And there was an awful lot of code mixed with HTML. Somehow this looks horribly like PHP3.

Oh, it was PHP3.

Of course PHP development changed over time and so did the Horde project. Nowadays each and every commit into our repository leads to the automatic execution of thousands of unit tests written by the Horde developers and they check our code for failures. Night and day our continuous integration server broadcasts the current test status to us in particular but also to anyone else interested.

With the release of Horde 4 the test suites of the Horde components available via our PEAR server all show some common patterns. There are certain Do's and Don'ts and a lot of playground in between. Often the Horde_Test component is involved. So it makes sense to associate the overview on the anatomy of Horde test suites with this particular module.

I must admit that I really like the way the Horde project approaches unit testing. There is no way we could be unit test purists, which would be too extreme given that the project has existed for more than a decade. But at the same time there was also no one complaining when testing entered the equation. It just felt like continuing to adhere to the quality standards that seem so familiar when it comes to Horde code.

So much for now. More technical stuff to follow soon...

Wednesday, June 01, 2011

Mixing stable Horde components with snapshots

One thing I completely love about Horde 4 being a component framework is the ability to have stable installations into which I can inject experimental snapshots of packages that are being further developed, patched, or hacked in some other way. Here is an overview of one such installation:

PACKAGE                   VERSION              STATE
Horde_ActiveSync          1.0.0                stable
Horde_Alarm               1.0.1                stable
Horde_Argv                1.0.1                stable
Horde_Auth                1.0.3                stable
Horde_Autoloader          1.0.0                stable
Horde_Browser             1.0.0                stable
Horde_Cache               1.0.3                stable
Horde_Cli                 1.0.0                stable
Horde_Compress            1.0.1                stable
Horde_Constraint          1.0.0                stable
Horde_Controller          1.0.0                stable
Horde_Core                1.1.2dev201105300702 stable
Horde_Crypt               1.0.2                stable
Horde_Data                1.0.0                stable
Horde_DataTree            1.0.0                stable
Horde_Date                1.0.1                stable
Horde_Date_Parser         1.0.0                stable
Horde_Db                  1.0.1                stable
Horde_Editor              1.0.1                stable
Horde_Exception           1.0.1                stable
Horde_Feed                1.0.1dev201106011224 stable
Horde_Form                1.0.1                stable
Horde_Group               1.0.0                stable

How do you get these snapshots? Ensure you have the horde-components helper set up. Then enter the directory of the package you wish to snapshot (e.g. Horde_Core) and run:

horde-components snapshot

The snapshot package will be assembled in this directory. It can then be uploaded to your web server and installed using:

pear upgrade --offline --force Horde_Core-1.1.2dev201105300702.tgz

What happens if you patched and deployed your package, sent the patch upstream, and the patch got accepted and released in a new package? Then you simply return to the official version:

pear upgrade horde/Horde_Core
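The whole round trip from above, condensed into one shell session (the package name and snapshot version are taken from the example listing; substitute your own):

```shell
# Inside the source checkout of the component, e.g. Horde_Core:
horde-components snapshot

# Copy the resulting tarball to the target machine, then install it:
pear upgrade --offline --force Horde_Core-1.1.2dev201105300702.tgz

# Once the patch has been released upstream, return to the official package:
pear upgrade horde/Horde_Core
```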


Blogging to blogger.com via Horde_Feed

With Horde 4 being available as PEAR based components, a lot of functionality that was hidden behind the Horde groupware applications has suddenly become available as building blocks for your own PHP based software. There is still a lot of documentation that should get written. As one piece of the puzzle I will use my blog to post short tutorials on interesting things you might wish to do with the Horde PEAR packages.

Let's start with blog posts today...

Creating Atom blog posts becomes a rather simple task with the help of the Horde_Feed package. Install it via PEAR first:

pear channel-discover pear.horde.org
pear install horde/Horde_Feed

Start with creating an instance of Horde_Feed_Entry_Atom and populate it with content similar to the example below:

$entry = new Horde_Feed_Entry_Atom();
$entry->{'atom:title'} = 'Entry 1';
$entry->{'atom:title'}['type'] = 'text';
$entry->{'atom:content'} = '1.1';
$entry->{'atom:content'}['type'] = 'text';

The type could also be html or xhtml if the text in the corresponding field is not plain text. See the overview of the Atom format for details.
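Building on the example above, an entry whose content carries markup would declare the html type accordingly (same entry API as above; the content string is just an illustration):

```php
$entry = new Horde_Feed_Entry_Atom();
$entry->{'atom:title'} = 'Entry 2';
$entry->{'atom:title'}['type'] = 'text';
// The content contains HTML markup, so declare it as 'html'.
$entry->{'atom:content'} = '<p>Some <em>formatted</em> content.</p>';
$entry->{'atom:content'}['type'] = 'html';
```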

Now it is sufficient to post the entry with:

try {
    // Post the entry. The URL is a placeholder for your feed's post URI;
    // the save() call follows the Zend_Feed heritage of Horde_Feed.
    $entry->save('http://www.example.com/feed/');
} catch (Horde_Feed_Exception $e) {
    die('An error occurred posting: ' . $e->getMessage() . "\n");
}
In most situations this example will be somewhat too simplistic. Most sites will require you to authenticate before being able to add Atom entries. How such authentication information can be transmitted when posting the entry is detailed in the blogger.com example that comes with the Horde_Feed package. At the time I'm writing this the authentication is not yet available in the released package though - you will have to wait for the next release to hit pear.horde.org.

Tuesday, May 31, 2011

Horde continuous integration got updated

The Horde continuous integration server received an update to the most recent version of Jenkins. In addition two new jobs were added to the system: Kolab_Storage and Imap_Client. May they stay forever green! The total number of packages under CI surveillance is 48 by now.

Will have to think about ownCloud

While I'm still waiting for the Horde 4 interview with Jan Schneider to appear on Radio Tux I got a reminder to think about ownCloud. While writing this I am listening to the interview about ownCloud.

I'm not yet certain it is actually different from Horde. Of course the way both projects started and the ideas behind them differ. But a lot of the targets seem to be very similar. In any case I need to identify whether there are any bridges that could be easily built.

I'm mainly interested in seeing what kind of APIs they provide to the outside and whether this could be offered by Horde as well to support connecting desktop clients. Or maybe Horde could be integrated as a data serving backend. Of course ownCloud still lacks one thing: a decent storage backend. No Kolab support so far ;) ...

A number of Kolab_* releases on files.kolab.org

Yesterday and this morning I released Kolab_FreeBusy-0.5.2, Kolab_Server-0.5.1, and Kolab_Storage-0.5.1. All of these are bug fix releases for the Kolab server.

With Horde 4 being deployed via pear.horde.org I cannot release the Kolab_* packages from the Horde 3 branch via that channel anymore. So you can expect updates to hit files.kolab.org exclusively. I don't like that too much but as Horde 3 was not yet released via PEAR and the Kolab_* releases are only targeted at the Kolab server this compromise makes sense.

Tuesday, May 17, 2011

Is Horde PHP?

LinuxTag was really nice for the Horde project because of all that positive feedback to our recent release and the valuable suggestions for stuff that would be nice to have in Horde. Might be worth another blog post. Here I just wanted to log one of the funnier conversations I had ...

"Is Horde PHP?"


"Do you use unit tests?"

"Absolutely. We got about 3000 of those."

"Oh, great. Mind writing an article about PHP Unit testing in Linux Magazine?"


Monday, May 16, 2011

Video of the Horde 4 talk on CeBIT

A while ago I presented a talk for the Horde Project at CeBIT 2011. The link to the corresponding video comes somewhat late but here it is: Horde 4 (the talk is in German).

I could have used better slides, I could have stopped saying "eh", and so on and so on - always nice to see yourself talking in order to improve. But I felt the content is okay and still relevant. Feedback welcome!

Blogging via Jonah with the Kolab backend

This should be the first blog entry using Horde 4 Jonah with a Kolab backend. This obviously still needs a lot of improvement but it means I can finally store one additional data element I work with in my favourite storage backend. And export it to the cloud on demand.

Thursday, February 24, 2011

Another intermediate Horde 4 release for Kolab-Server-2.2.4

Two days later than promised in the revised roadmap there is finally an "intermediate" release of Horde 4 for Kolab ready. "Intermediate" being the euphemistic word for "it contains the dynamic calendar, but that one is still pretty broken".

To install the release on a Kolab-Server-2.2.4 system the following commands should suffice:

wget http://files.pardus.de/horde4-20110224.sh
sh horde4-20110224.sh 

The usual warnings that went with the first Horde 4 release for Kolab apply to this release as well: do not consider doing this on a production server. This is just an early preview. And if you want to see anything useful in the added calendar application you should ensure the user you log in with already has a calendar folder with some data in it.

The state visible in the calendar frontend does not do the changes that happened in the backend any justice. But it can't be helped at the moment: while the Kolab backend for Horde is now largely complete, the connections to the various Horde applications still need to be adapted to the changes in the backend.

Tuesday, February 22, 2011

Goodbye Hudson, hello Jenkins!


The Horde continuous integration setup switched from Hudson to Jenkins today. The switch was nothing I fancied because it meant fixing a number of CI setups that I created in the past months. However, with the core developer Kohsuke on the Jenkins team it didn't make much sense to stick with Hudson.

In order to make this switch feel at least somewhat productive I threw some additional updates into the pot. Here is a rough changelog:

Thursday, February 10, 2011

IMAP capabilities of the Apache Zeta "Mail" component


I have been quite busy on the Horde Kolab_Storage component these past weeks. That involved working with four different PHP IMAP libraries:

I will definitely summarize the results here at a later point as there are many IMAP and also PHP specific IMAP lessons to be shared.

Right now I am pondering the Apache Zeta "Mail" component, though. It is great that it is available as a standalone component. And I know it has been described as "Doing mail right". Which it may do.

But when it comes to "Doing mail over IMAP right" I must admit that I'm not so certain. The code seems to be very weak in terms of IMAP capabilities and it apparently supports only a tiny fraction of the complexity the IMAP protocol offers. A complexity that is needed however to perform mail access in an efficient way. And a complexity that is also required for getting Kolab data access right.

What I would like to know is whether my quick browsing of the code gave me the right impression. Is there anyone out there saying that the Apache Zeta "Mail" component can actually be efficient when dealing with IMAP? If so I would make the effort of adding another backend driver to Kolab_Storage and include this one in the comparison that will eventually result from the recent Kolab_Storage refactorings.

Tuesday, February 08, 2011

Horde 4 presentation at CeBIT 2011

Next month CeBIT 2011 will start and there will be a short presentation about Horde 4 at the Open Source Forum. It is scheduled for 2nd March 2011 at 16:45.

In case anyone wants to meet with developers from the Horde team on or around that day, some of us should be available in Hannover. Hope to see you there!

Edit: The schedule of talks for the Open Source Forum

Horde 4/Kolab road map update

The roadmap for Horde 4/Kolab that was published in November needs some adjustments. Here is the updated version:

  • [06.12.10] COMPLETED - Horde 4 Portal + Mail
  • [22.02.11] Horde 4 Calendar (alpha)
  • [22.03.11] Horde 4 Addressbook, Notes, Tasks (beta)
  • [12.04.11] Horde 4/Kolab Final

There were several factors that made the corrections necessary:

  • The Horde/Kolab integration machinery needs some more work to allow using the new Horde 4 IMAP capabilities to their full extent and benefit from a major performance boost. This delays the release of the calendar.
  • The number of pre-releases initially planned was simply too high and needed to be reduced. As a result the releases of the other applications are now combined.
  • The Horde development team decided on a final release date for Horde 4 so it is now possible to add a release date for the first stable Horde 4/Kolab release.
  • Releases were switched to Tuesdays.

You are encouraged to watch the Horde commit stream. There are a lot of Kolab related commits flying by at the moment. More on the changes soon.

Wednesday, February 02, 2011

Ubuntu based Kolab-Server on EC2

During the last two years p@rdus has published the new Kolab Server versions as ready-to-go Amazon EC2 images. These images were always based on Gentoo.

This base platform has recently been replaced with Ubuntu as the underlying distribution. Since yesterday a Kolab-Server-2.2.4 image has been available to be started and used for quick testing. You can expect all upcoming server versions to also be based on Ubuntu. This does of course include the Kolab-Server-2.3 that will hopefully be released soon.

Instructions on how to use the Kolab Server images can be found in the Kolab wiki.

Tuesday, February 01, 2011

Horde Project Newsletter

The date for the next Horde4 release is drawing nearer and we felt it made sense to provide another news channel for the Horde Project.

So we are getting ready to launch a new newsletter that will keep you up-to-date on the progress of the Horde Project. Delivered around once per month, it will contain highlights of the Horde project's development and provide insight into our features and project plans.

To receive Horde news please click here to sign up!

We promise to respect your privacy and will never share your email address with a third party.

Tuesday, January 25, 2011

Horde release cycle

... and finally Horde4 in 2011.

The gap between 3.0 and 4.0 has been too large. The Horde team tries to keep backward compatibility within a major version. Probably nobody has tried running a recent Kronolith from 2010 on a Horde 3.0 from 2004; there would be some issues and limitations, but they should be minor.

While this kind of long-term support may be useful for some edge cases, it can also impede development progress. A good example are the Kolab drivers in the currently released version: Kolab support within Horde started to improve in 2006. At that time Horde3 was already the active branch and it contained some very basic Kolab functionality, so the new code had to keep those interfaces stable. That basically meant twisting and bending the new code until it did just that, but was otherwise really problematic. This code is still a reality in the current stable Horde3 release.

This and similar problems did not go unnoticed, though, and changes to the release cycle have been discussed internally for a while already. With the release of Horde4 approaching, it now makes sense to discuss such changes with the Horde community. The envisioned target is a time-based release cycle.

The discussion started today on the Horde development mailing list. Feel free to listen in or to add your own comments if you want to influence the direction of our future release mode.

Friday, January 21, 2011

Hudson Quickie


After setting up my third Hudson instance with the same installation procedure it was definitely time to extract the whole Hudson-specific part into its own repository.

It is nothing fancy, as the installation procedure for Hudson is pretty straightforward anyhow. But maybe you want to get Hudson quickly running on your own Linux machine with a few standard plugins pre-installed. Then the repository might be exactly what you need.

Just clone the repository with

git clone git://github.com/wrobel/hudson-install.git

and follow the "Install" instructions in the README. Of course you can also just fork the repo in case you need your own predefined set of plugins installed into Hudson.

Wednesday, January 19, 2011

The Horde4 package mill for Debian

As mentioned before, p@rdus intends to provide Horde4 packages for Debian besides the Kolab-specific OpenPKG-based build. The first steps of this process have now been taken and another Hudson-based package mill has been created for the task. The new system builds Horde4 packages for Debian after each upstream commit and lives here.

This is just considered a first step on the way to a full Debian release, as there will be a fair amount of quality control required to get the packages fit for wider distribution. In addition, the exact policy on how to handle PEAR packages on Debian is apparently still under debate. p@rdus will now join this discussion with a set of more than 50 PEAR-based packages in tow.

The timing for this seems just right, as there are still a few weeks left until Horde4 sees its first release. So there should be enough time to ensure that there are no major conflicts between PEAR-based and Debian packaging.