Sun - the dot in

Sorry about the lame pun. I couldn't resist.

How will the Sun acquisition affect Redpill Linpro and its products? It's not quite clear yet, but I have a few ideas.

Hopefully, the Unbreakable Linux (OEL) thing will go away. Oracle just got a hold of Solaris. OEL doesn't add any value to the RHEL source and should just go and die somewhere. Don't get me wrong - OEL isn't a huge threat, it's just annoying.

If Oracle doesn't quickly state what it will do with Java, there might be a lot of uncertainty and therefore damage. Microsoft would benefit from this and would try to take advantage of the situation.

OpenOffice will live on and might even get stronger under Oracle. OpenOffice does real damage to Microsoft, and Oracle loves damaging Microsoft. Go Oracle!

MySQL is a bit odd. It started out as a very simple database, but lately they have, unsuccessfully, tried to make it into a really complex product. Oracle will probably make it very clear that MySQL shouldn't compete with Oracle and that it should do what it does best - being simple, fast data storage with an SQL interface. Oracle will hopefully push MySQL in the direction the Drizzle fork is going.

Solaris will of course live on to slowly die together with the other Unix dialects.

ext4 and the Jaunty Jackalope

I've been longing for ext4 for a long time now. ext3 has a really broken implementation of fsync(2): each fsync() turns into a sync(), so if there is some activity on the file system your computer will slow to a halt. Firefox uses SQLite, and SQLite uses fsync() quite a lot. Ubuntu 9.04 will be released in about a month, so I thought I might help out, test it, and finally get a sane fsync(). There are also a few really annoying bugs in Ubuntu 8.10 I hoped would be fixed. Great, I thought.
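If you want to feel the fsync() pain yourself, GNU dd's conv=fsync flag calls fsync(2) once per invocation, so a little sketch like this hammers the file system with a hundred fsyncs (the scratch path is made up, and timings will obviously vary):

```shell
#!/bin/sh
# Write a tiny file 100 times, fsync()ing after each write.
# On ext3 with other I/O going on, each fsync degenerates into a sync.
f=/tmp/fsync-test.$$
start=$(date +%s)
i=0
while [ $i -lt 100 ]; do
    dd if=/dev/zero of="$f" bs=4k count=1 conv=fsync 2>/dev/null
    i=$((i + 1))
done
echo "100 fsyncs in $(( $(date +%s) - start ))s"
rm -f "$f"
```

Run it on an otherwise busy ext3 volume and then on ext4, and the difference should be hard to miss.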

Upgrading is really quite simple: run "sudo update-manager -d" in a shell, press "next" a few times and you're done. If you're really crazy and want to convert your file systems to ext4 as well, here is how you can do it.

First, find some sort of live Linux distribution. I prefer PLD rescue: it's small, fast and it has all the tools and file systems you'll need. Make sure it supports ext4. If you don't want to convert your root file system you can skip this.

Boot the rescue system and get a root shell. Now you need to enable all the features that turn an ext3 file system into an ext4 file system:
# tune2fs -O extents,uninit_bg,dir_index /dev/DEVICE

Replace DEVICE with the real name of your, uhm, device. tune2fs will tell you that you'll need an fsck after you've enabled the new features.

# e2fsck -fyD /dev/DEVICE

Rinse and repeat for every file system on the system. Do a search/replace of ext3 with ext4 in /etc/fstab afterwards. Now you can reboot. You don't have to fuss with the initramfs, since the nice Ubuntu people finally built ext4 support into the kernel. I guess they did this to save time on boots - but the fact that the whole system now autodetects what kind of root file system you have doesn't hurt.
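The search/replace is one-liner territory with sed. Here it is shown on a made-up sample line; on the real system you'd run sed -i against /etc/fstab itself:

```shell
# Rewrite the file system type field from ext3 to ext4.
# For real: sed -i 's/ext3/ext4/' /etc/fstab
echo "/dev/sda1 / ext3 errors=remount-ro 0 1" | sed 's/ext3/ext4/'
# -> /dev/sda1 / ext4 errors=remount-ro 0 1
```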

Eject the live CD and reboot. Voila!

Google calc on the command line

At some point I wrote a currency calculator. It downloaded a list of current exchange rates from a bank, somewhere. The drawback was that it could only convert to and from NOK.
I decided to give it a brush-up. After tinkering a bit with it I realized that Google has quite a nice exchange rate calculator built into its generic calculator. I wanted something I could operate from the command line, so I went and found the WWW::Google::Calculator module on CPAN. On my Ubuntu laptop this module is packaged as libwww-google-calculator-perl. The module does everything I needed, really; I just needed to call it. Anyway, here is how it works:

perbu@thimk:~$ googlec 100 USD in EURO
100 U.S. dollars = 78.976465 Euros

The code is surprisingly simple:

#!/usr/bin/perl
use strict; use warnings;
use WWW::Google::Calculator;
my $calc = WWW::Google::Calculator->new;        # one query object is all we need
print $calc->calc(join(' ', @ARGV)), "\n";      # hand the whole command line to Google

As a bonus it does all sorts of other calculations:
perbu@thimk:~$ googlec "5*9+(sqrt 10)^3"
(5 * 9) + (sqrt(10)^3) = 76.6227766

Cool, eh?

perbu@thimk:~$ GET -Used|grep Varnish
X-Varnish: 1998896488 1998894734

This just in: iFolder is moving

From the Novell bugzilla:

Based on your commitment to have XXXX spearhead the connections with the community, and his efforts to help drive a vibrant community for both Kablink and iFolder, we are aligned to take the newer iFolder code (3.7) and open source it. So, officially we support open sourcing iFolder. We are aligned, and as such XXXX and I will inform the PMs and Development of that plan. I would ask that XXXX lead the effort to bring the teams together and perhaps kick off the new direction - perhaps build some team unity around the effort. We'll
need, at least initially, the engagement of PM, Development, Legal, Product Mktg,and potentially operations, etc.

XXXX is an anonymisation.

It's not too late. iFolder is a great product, and if Novell plays its cards right (doesn't screw up, that is) iFolder might be a success. I'd love to see it.



perbu@thimk:~$ GET -Used | grep Varnish
X-Varnish: 301823638

Who would have thunk it?

tar vs. dump

Some people claim dump is irrelevant. Linus Torvalds claimed at some point that dump was a relic of the past. The real issue was that there was no way in Linux to synchronize a file system at the time due to a silly bug in the kernel.

Well, a lot of people still find dump a useful tool. It's easy to use and it's fast - in fact it's really fast. tar and just about every other backup tool accesses the file system through the directory structure. The file system on disk is not ordered in the same way as its directory structure, and the result is a lot of time spent seeking. dump opens the underlying device and accesses the data in its native order.

I ran a primitive benchmark just now:
  1. sync the filesystem (an ext3 filesystem on an encrypted volume).
  2. flush out the page and dentry caches (echo 3 > /proc/sys/vm/drop_caches)
  3. run the backup
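Scripted, those three steps look something like this - run_backup is just my name for a hypothetical helper, and dropping the caches needs root, so that step is allowed to fail quietly on an ordinary account:

```shell
#!/bin/sh
run_backup() {
    sync                                                           # 1. flush the filesystem
    sh -c 'echo 3 > /proc/sys/vm/drop_caches' 2>/dev/null || true  # 2. drop page/dentry caches (root)
    start=$(date +%s)
    "$@" > /dev/null                                               # 3. run the backup job, discard the stream
    echo "elapsed: $(( $(date +%s) - start ))s"
}

# Stand-in job; for the real thing use e.g. "tar cf - /home/perbu".
run_backup ls /
```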
I did this for four different backup jobs:
  1. full backup with tar
  2. incremental backup with tar
  3. full backup with dump
  4. incremental backup with dump
The results:
tar cf - /home/perbu                                          37m 55s
tar --after-date 2008-11-01 -cf - /home/perbu                  3m 59s
dump -f - /dev/vg0/perbu                                      13m 22s
dump -f -T 'Fri Nov 01 00:00:00 2008 +0100' /dev/vg0/perbu     2m 22s
The results are quite clear: dump is far superior to tar performance-wise. A lot of sysadmins have problems making their backups stay within the backup window, and dump is a very useful tool for those people.

I would guess that on an SSD the results would be more or less the same, as the seek times are more or less zero. If someone gets me an SSD I'll make a post about it. :-)

However, there is a price for this performance. If your file system is very active there might be changes that are not yet flushed out to disk, and those data might not be backed up completely. To be 100% sure everything is backed up you might want to take a snapshot of the device and dump that. For a personal computer, however, the risk is negligible.

Happy dumping!


The iFolder community is dead. Sad. Sad. Sad.

Last week I saw the following post to the ifolder-dev list - "HELP!". I'm not sure if it was sent in error - but it illustrates the state of the iFolder project.

The iFolder community is dead. Sad, really, as iFolder is a really, really cool project. Unfortunately it got bitten by the same virus as everything Novell touches. Kind of an inverse King Midas syndrome - everything they touch seems to turn into crap (WordPerfect, Corel, Quattro Pro, NetWare). Well, maybe not crap, but the software gets forgotten. iFolder could have ruled the world, more or less. Well, it could have taken a huge bite out of Microsoft's SharePoint revenue, at least. If they had teamed up with some producer of SMB NAS boxes it would have been a huge success, I'm sure.

A few years ago iFolder was a pretty cool, and quite simple, application, written in Java and able to run under most app servers. Then this de Icaza character comes along with his patent-infected Mono crap and Novell suddenly feels the urge to reimplement iFolder in Mono. iFolder then gets seriously broken and Novell spends a couple of years getting iFolder back into working condition. Along the way I think they reorganized a few times, and suddenly iFolder is now a closed source project developed inside Novell. They didn't even bother to notify the mailing list or say something on the Wiki.

Blargh. Sad.


Wikipedia must die (or change radically)

I love Wikipedia. I use it several times every day. All of mankind working together towards a common goal - creating a complete encyclopaedia containing most of our knowledge.

There is a problem: Wikipedia is often considered to be the truth, so having control over a certain set of Wikipedia articles gives power. Since Wikipedia allows you to hide behind a pseudonym, your motives for editing an article can be very well hidden. Wikipedia tries to counter this by settling conflicts of interest in the appropriately named "conflict of interest board". This is not enough. First of all, a conflict of interest is really hard to spot if you don't know who has written the article. An unsuspecting journalist might run a background check on some topic and be handed a bias without knowing it. A controversial figure running a whistleblower campaign against some powerful entity might be smeared by an article making him a pariah to the press. The Register ran a story which illustrates this well.

Secondly, there has been at least one account of people on the board having a conflict of interest themselves (see this article on The Register), making resolving the conflict rather difficult.

As long as Wikipedia allows editors to hide behind a pseudonym it cannot be considered a credible source for controversial topics. Wikipedia must change and become more transparent.