First impressions of the G1

Someone over on Hacker News asked me what I thought about the new G1 (aka “Google phone”), as compared to the iPhone (which Jamie wrote about here).  I figured those comments would be generally useful to folks thinking about picking between the two coolest phones available right now.  I’ve now had a full day to play with it, and I’ve spent several days tinkering with iPhones, so here’s my off-the-cuff “review” of the new G1.

The hardware is mildly disappointing. It doesn’t “feel” as nice as the iPhone (though the 3G iPhone now has a plastic back and doesn’t feel as nice as the first-gen iPhone either, so I guess things are tough all over), or even as solid as my old Sidekick. When taking the back off to put in the battery and SIM card, I felt like it was going to break. It didn’t, but it felt like it might. Likewise for the little covers for the data/charge port and the SD memory slot…they’re plastic and tiny and feel fragile. The one exception is the keypad, which feels very nice to me. But, keeping in mind that the G1 is dramatically cheaper than the iPhone 3G ($179 up front, and cheaper per month for the plan–though I haven’t actually compared plans, I just kept the one I’ve been on, which is about $10 cheaper than the iPhone’s–so about $240 over 2 years), I think it’s a great buy, and a whole lot of hardware for a very small price. I think I probably would have been happy to pay $20 more for slightly sturdier construction, though, particularly on the port cover…I have a bad feeling that it’s going to get broken long before I’m ready to upgrade to a new phone.

Price wasn’t my primary deciding factor (openness was), but it certainly didn’t hurt that I wouldn’t have to pay more per month to upgrade to the G1, while I’d pay out $240 more over 2 years for the iPhone 3G for the same service. I’m going to see if I can drop down to a smaller voice plan–the Sidekick I had before had minimum plan requirements, but I didn’t see any such requirements when signing up for the G1, so I might even be able to save $5 or $10 per month with the G1 over the Sidekick, since I rarely use the phone. I use maybe 100 minutes per month of a 1000 minute plan.

Software-wise, it’s plain awesome. The lack of two-finger gestures, as found on the iPhone, is somewhat disappointing, but it’s no slower to use the popup magnifier buttons once you’re accustomed to them. Dragging and such is smooth and accurate (seemingly more accurate than the iPhone, for me, but maybe it just feels that way because the keypad means that I don’t have to use the touch screen for typing fiddly stuff, as on the iPhone), so I guess the touch screen is pretty good quality. Since the Android developers are some of the same folks who developed the Sidekick, everything feels very intuitive to me (where, as usual, “intuitive” means “what I’m used to”). It also has Linux underneath everything, so that may be a “comfort” factor for me too, I’m not sure.

Basic phone features work well and are easy to use, it sounds good and clear, and having Google mail, contacts, calendar, etc. is sweet (my old phone couldn’t handle more than about 500 messages via IMAP, and I get more messages than that in a week; GMail, obviously, just works great with practically infinite mail). Web pages look great, and browsing is fast. WiFi was quick and easy to set up. YouTube videos work great, both on 3G and on WiFi. It’s my understanding that T-Mobile’s 3G network is still somewhat small…so if you don’t live in the valley, your mileage may vary, but it works fine for me here in Mountain View.

I installed an Open Source ssh client off the web called ConnectBot; no jailbreaking required, which was a big issue for me with the iPhone. I don’t want to have to have permission to install arbitrary apps that I’ve written or someone else has written. I also installed Compare Anywhere, and a bubble game from the app store, and the quality of everything is really slick. Really impressive for a launch day catalog, especially since everything is free right now. I haven’t spent a lot of time with the apps on the iPhone, so I don’t have much to compare to. But I’m excited to play with more stuff from the catalog, and I think I’m going to try my hand at writing an app or two for the platform.

Also, it worked right away when I plugged it into my Linux desktop. No futzing around with weird stuff to get music onto the device. The iPhone/iPod is a bitch in that regard. That one thing made me ecstatic in ways I haven’t felt over a device in a long time. Coming off of years of messing around with iPods and an iPhone, and never having them quite work right, having a drag and drop music experience is miraculous. Take this with a grain of salt, as I may be strange. I find iTunes incredibly confusing and difficult to use, so on those occasions when I’ve given up on getting Linux to work and rebooted into Windows, or borrowed a friend’s MacBook, and run iTunes to put music onto the device, I’ve ended up spending a long time futzing around anyway, because I never could figure out what all the syncing options meant or how to use them. So, every time, I would fiddle until something happened; occasionally it would just end up deleting all of my songs on either the PC or the device, and I would give up in disgust. I also find several other Apple software products hard to use, so I could just be too stupid to be trusted with a computer without some adult supervision.

I think it will be interesting to see how this battle plays out in the market, and I’m certain that the G1 is just the opening salvo.  Even if the G1 “loses” the battle against the iPhone (if selling out all available units even before the launch date can really be considered a “loss” in anyone’s book), and fails to pick up significant market share, there will be another battle in a few months, and the battles will come faster and more furiously as other manufacturers adopt Android.  Each one will wear Apple down, and will open a new front that Apple will have to either engage on or ignore.  For example, very low end phones will come into existence next year that provide smart phone capabilities, as will higher end devices with more capabilities, or special purpose capabilities to answer niche markets.  Apple can’t fight on all fronts, and every niche it loses strengthens the value of the Android platform to developers, and thus to end users.  If Android sucked, like Windows Mobile, this wouldn’t be a cause for concern for the folks at Apple.  But Android doesn’t suck.  It’s really nice to use.  As nice as the iPhone?  Maybe not…but it’s getting better rapidly (if you tried the last developer release a few months ago, you’ll know that there have been a lot of improvements since then).

As with PC vs. Mac two decades ago, it’s a battle of two different ideologies.  On one side, you have openness: the ability for hardware makers to produce widely varying hardware while still providing the same basic user interface.  On the other, you have a single-source ecosystem: Apple designs every aspect of its products and can control every element of the user experience, down to the very applications that run on the platform.  Except, this time around, there are two additional elements: the telcos, and Open Source.  While the telcos are going to fight to keep all cell phones locked down pretty tightly, and try to ensure that they can extract money for just about everything novel you want to do with them, the Open Source nature of Android is going to allow people to do things with mobile devices that have been impossible to date (even on very powerful, but locked down, devices like the iPhone).

To me, it looks like Apple is going to make the same mistakes they made with the Mac years ago: treating their application developers poorly by competing with them unfairly or simply locking them out of the market; disrespecting users who want to use their devices in ways not imagined by Jobs, treating them not as customers but as enemies; and, finally, denying that price is a major factor in people’s purchasing decisions.  If it plays out as it did in the PC wars, Apple will find that they have fewer applications for their devices, and a steadily declining market share (even as actual sales continue to increase, since the smart phone market is growing rapidly, and all ships rise in a growing market).  Since it isn’t Apple vs. Microsoft this time, but instead Apple vs. Open Source (and openness in general), I know which side I’m on.  I’ve had a tendency to pull for the scrappy underdog in the past, and Apple has very frequently been the scrappy underdog…but unfortunately it’s an underdog with a Napoleon complex, so it’s not exactly a good choice for replacing the old tyrant.  But, luckily, this time around, we have a wide open alternative…and it’s actually really good and really easy to use.  Maybe the “Open Source on the desktop” movement, to date, has all been preparation for this moment…the moment when Open Source can finally be a great option for regular, non-technical consumers.  I think that’s pretty exciting.  And we’ll just have to wait and see if Apple winds up on the wrong side of history, again.

Creating an iPhone UI for Virtualmin

Introduction and History

Around the start of the year, I created a theme for Webmin designed to make it easier to access from mobile devices like smartphones and cellphones with simple built-in browsers. At the time I had a Treo 650, which had a very basic browser – certainly not powerful enough to render the standard Virtualmin or Webmin framed themes.

By using Webmin’s theming features, I was able to create a UI that used multiple levels of menus to access global and domain-level settings, instead of frames. The theme also changed the layouts of forms to be more vertical than horizontal, used fewer tables, and removed all use of Javascript and CSS for hiding sections.

This was released in the virtual-server-mobile theme package version 1.6, and all was good in the world. Anyone using it could now access all the features of Virtualmin from a very basic browser, and read mail in Usermin without having to rely on the awful IMAP implementations in most smartphones.

This shows what Virtualmin looked like on a Treo:

Then I bought an iPhone.

It has a much more capable browser, technically the equal of any desktop browser like Firefox or IE. The regular Webmin themes actually worked fine, although a fair bit of zooming is needed to use them. The mobile theme looked like crap, as it didn’t use any of the browser features like CSS and Javascript that the iPhone supports. Plus the layout rendered poorly due to the use of long lines of text that didn’t get wrapped at the browser’s screen width.

On the iPhone, the Create Alias page in the mobile theme looked like this:

And in the regular Virtualmin theme, the Create Alias page looked like:

I mentioned this to Joe, and he pointed me at iUI, an awesome library of CSS and Javascript that allows developers to create websites that mimic the look of native iPhone applications. After trying out the demos and looking at their source code, it was clear that iUI would be perfect for creating an iPhone-specific theme.

It wasn’t quite as simple as I first thought, but after some hacking on both the theme code and iUI itself I was able to come up with a pretty good layout, as you can see in this screenshot of the Create Alias page:

Menu Implementation

Actually getting iUI to play nicely with the Webmin theming system was slightly more complex than I originally expected, though. For example, an iPhone-style multi-level menu that slides to the left is implemented in iUI with HTML like:

<ul id='main' title='My Menu' selected='true'>
<li><a href='#menu1'>Submenu One</a></li>
<li><a href='#menu2'>Submenu Two</a></li>
</ul>
<ul id='menu1' title='Submenu One'>
<li><a href='foo.cgi'>Foo</a></li>
<li><a href='bar.cgi'>Bar</a></li>
</ul>
<ul id='menu2' title='Submenu Two'>
<li><a href='quux.cgi'>Quux</a></li>
<li><a href='#page'>Some page</a></li>
</ul>
<div id='page' class='panel' title='Some page'>
Any HTML can go here.
</div>

As you might guess, CSS and Javascript are used to show only one menu or div at a time, even though they are all in the same HTML file. This is quite different from the way menus are usually created in Webmin.

To get this kind of HTML from the theme, I created an index.cgi that generates a large set of <ul> lists and <div> blocks containing all the Virtualmin domains, global settings, Webmin categories and modules. This is loaded by the iPhone when a user logs in, and allows quick navigation without any additional page loads. For example, these screenshots show the path down to the Users and Groups module. Only the last step requires an extra page load:

The index.cgi script is able to fetch all Webmin modules and categories with the functions get_visible_module_infos and list_categories, which are part of the core API. It also fetches Virtualmin domains with virtual_server::list_domains and global actions with virtual_server::get_all_global_links.

For example, the code that generates the menus of modules and categories looks roughly like:

my @modules = &get_visible_module_infos();
my %cats = &list_categories(\@modules);
print "<ul id='modules' title='Webmin Modules'>\n";
foreach my $c (sort { $a cmp $b } (keys %cats)) {
    print "<li><a href='#cat_$c'>$cats{$c}</a></li>\n";
    }
foreach my $c (sort { $a cmp $b } (keys %cats)) {
    my @incat = grep { $_->{'category'} eq $c } @modules;
    print "<ul id='cat_$c' title='$cats{$c}'>\n";
    foreach my $m (sort { lc($a->{'desc'}) cmp lc($b->{'desc'}) } @incat) {
        print "<li><a href='$m->{'dir'}/' target=_self>$m->{'desc'}</a></li>\n";
        }
    print "</ul>\n";
    }
print "</ul>\n";

The actual iUI styling and menu navigation comes from CSS and Javascript files which are referenced in the <head> section of every page, generated by the theme’s theme_header function, which overrides the standard Webmin header call.
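As a rough illustration, an override along these lines would do the job. This is my sketch, not the actual theme code: the iui.css/iui.js paths and the viewport width are assumptions.

```perl
# Hypothetical sketch of a theme_header override; the file paths and
# viewport width are illustrative, not what the real theme uses.
sub theme_header {
    my @args = @_;
    print "<html>\n";
    print "<head>\n";
    print "<title>", ($args[0] || ""), "</title>\n";
    # Tell the iPhone to render at its native screen width
    print "<meta name='viewport' content='width=320'>\n";
    # Pull in the iUI styles and menu-navigation Javascript on every page
    print "<link rel='stylesheet' href='/iui/iui.css' type='text/css'>\n";
    print "<script type='text/javascript' src='/iui/iui.js'></script>\n";
    print "</head>\n";
    print "<body>\n";
}
```

Because every themed page goes through this one function, referencing the CSS and Javascript here is enough to get the iUI look everywhere.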

Other Pages

Other pages within Webmin are generated using the regular CGI scripts, but with their HTML modified by the theme. This is done by overriding many of the ui_ family of functions, in particular those that generate forms with labels and input fields. Because the iPhone screen is relatively narrow, it is more suited to a layout in which all labels and inputs are arranged vertically, rather than the Webmin default that uses multiple columns.

For example, the theme_ui_table_row override function contains code like :

if ($label =~ /\S/) {
    $rv .= "<div class='webminTableName'>$label</div>\n";
    }
$rv .= "<div class='webminTableValue'>$value</div>\n";

The label and value variables are the field label and input HTML respectively. The actual styling is done using CSS classes that were added to iUI for the theme. The same thing is done in the functions that render multi-column tables, tabs and other input elements generated with ui_ family functions.
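For context, here’s roughly how that snippet might sit inside the full override. Only the two div lines come from the actual theme; the surrounding function is my hedged guess, with an argument list modeled on Webmin’s ui_table_row:

```perl
# Sketch of the whole override; only the two webminTable* divs are from
# the real theme code, the rest is a guess at the surrounding function.
sub theme_ui_table_row {
    my ($label, $value, $cols, $tds) = @_;
    my $rv = "";
    # Stack the label above the value, rather than side by side in table
    # columns, to suit the narrow iPhone screen
    if ($label =~ /\S/) {
        $rv .= "<div class='webminTableName'>$label</div>\n";
        }
    $rv .= "<div class='webminTableValue'>$value</div>\n";
    return $rv;
}
```

The nice part of this approach is that the CGI scripts themselves don’t change at all; only the theme’s idea of what a “table row” looks like does.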

The only downside to this approach is that not all Webmin modules have yet been converted to use the functions in ui-lib.pl, and so do not get the iPhone-style theming. However, I am working on a long-term project to convert all modules from manually generated HTML to using the UI library functions.

Headers and Footers

In most Webmin themes, there are links at the bottom of each page back to previous pages in the hierarchy – for example, when editing a Unix group there is a link back to the list of all groups.

However, iUI puts the back link at the top of the page next to the title, as in native iPhone applications. Fortunately, CSS absolute positioning allows the theme to place this link at the top, even though it is only generated at the end of the HTML. The generated HTML for this looks like:

<div class='toolbar'>
<h1 id='pageTitle'></h1>
<a class='button indexButton' href='/useradmin/index.cgi?mode=groups' target=_self>Back</a>
<a class='button' href='/help.cgi/useradmin/edit_group'>Help</a>
</div>

The toolbar CSS class contains the magic attributes needed to position it at the top of the page, even though the theme outputs it last.
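Those “magic attributes” amount to something like the following – a sketch of the idea based on iUI’s general approach, not its exact stylesheet:

```css
/* Sketch only: pin the toolbar to the top of the viewport regardless
   of where it appears in the HTML source */
body > .toolbar {
    position: absolute;
    top: 0;
    left: 0;
    width: 100%;
    height: 45px;
}
```

Because the positioning is absolute, source order stops mattering, which is what lets Webmin keep generating its navigation links at the end of the page.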

Old School to New School: Refactoring Perl (part 2)

When I left off, I’d made an old chunk of Perl code (that mostly pre-dates widespread availability of Perl 5) warnings and strict compliant, converted it to be easily usable as both a module and a command line script, added some POD documentation, and built a couple of rudimentary tests to make it possible to change the code without fearing breakage. Now we can get rough with it.

Refactoring for Clarity and Brevity

Despite the changes so far, the code is pretty much as it was when we started. It’s a little more verbose due to the changes for strict compliance, so that’s a negative, and the addition of a main function and the oschooser wrapper adds even more lines of code. The main code block was already a little bit long for comfort, at several pages’ worth of 80 column text, assuming a 50 row editor window. Jamie seems capable of holding a lot of code in his head at once…me, I’m kinda slow, so I like small digestible chunks. So, let’s start digesting.

Looking through the code, this bit jumped right out. It’s a little bit unwieldy:

    if ($auto == 1) {
      # Failed .. give up
      print "Failed to detect operating system\n";
      exit 1;
      }
    elsif ($auto == 3) {
      # Do we have a tty?
      local $rv = system("tty >/dev/null 2>&1");
      if ($?) {
        print "Failed to detect operating system\n";
        exit 1;
        }
      else {
        $auto = 0;
        }
      }
    else {
      # Ask the user
      $auto = 0;
      }

It seemed like this could be shortened a little bit by making a have_tty function, and using && in the elsif. Not a huge difference, but if we then flip the tests over (there’s only four possible values of $auto and they only result in two possible outcomes) and add an || we can lose a few more lines, one conditional, and cut it down to:

    if (($auto == 3 && have_tty()) || $auto == 2) {
      $auto = 0;
      }
    else {
      # Failed .. give up
      print "Failed to detect operating system\n";
      exit 1;
      }
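The have_tty helper itself isn’t shown above; a minimal version, preserving the same shell test the original inline code used, might look like this (my sketch, not necessarily the exact code):

```perl
# Sketch of a have_tty helper: returns 1 if we're attached to a
# terminal, 0 otherwise, using the same "tty" shell test as the
# original inline code.
sub have_tty {
    return system("tty >/dev/null 2>&1") == 0 ? 1 : 0;
}
```

Perl’s -t STDIN filetest would answer the same question without forking a shell, but shelling out to tty matches the original code’s behavior exactly, and behavior-preserving is the name of the game at this stage.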

That’s a lot less code to read. I also think it makes more sense to have a single failure block and a single block setting $auto. I’m still trying to figure out the purpose of auto=2, since it seems like it would only be possible to fall back to asking a question if there’s actually a TTY. But Jamie knows the quirks of various systems far better than I do, so we’ll leave it alone for now and keep the same behavior. I bet it’s to accommodate something funny on Windows!

I’ve also made a few tweaks during this process, adding a package OsChooser; statement to the beginning of the file. I also discovered that $uname actually is being used in this code! It’s hidden inside the os_list.txt definitions: there are a couple of eval statements being used to execute arbitrary code on each line found in the OS list. Jamie must have been a Lisp hacker in a former life, with all this willy-nilly mixing of code and data. It’s a pretty clever bit of code, though it took me a little while to grok; now that I know what’s going on, we can solve some of the problems we had with testing detection of systems other than the one we’re running on.

In the meantime, just know that $uname has returned as an our variable, so that it can be “global” without ticking off strict.

More Tests

So, automated testing of an OS detection program isn’t a whole lot of good if I can only test detection of one operating system (the one it happens to be running on right now). So, we need to introduce a bit more flexibility in where the OS-related data comes from. This is trickier than it sounds. UNIX and Linux have never standardized on one single location for identifying the OS. Many systems identify themselves in /etc/issue, while those from Red Hat use /etc/redhat-release (they also have a reasonable issue file, but I’m guessing it’s not reliably present or reliably consistent in its contents, as Jamie has chosen to use the release file instead), and Debian has a /etc/debian_version file. Sun Solaris and the BSD-based systems seem to all use uname, and those are just the popular ones. Webmin also supports a few dozen more branches of the UNIX tree, plus most modern Windows versions!

So, looking at oschooser.pl you’re probably wondering where the heck all of that extra stuff happens, because it doesn’t really have any detection code of its own. The answer is in os_list.txt, which is a file with lines like the following:

Fedora Linux     "Fedora $1" fedora  $1    `cat /etc/fedora-release 2>/dev/null` =~ 
/Fedora.*\s([0-9\.]+)\s/i || `cat /etc/fedora-release 2>/dev/null` =~ /Fedora.*\sFC(\S+)\s/i

This is a tab-delimited file. Why tabs? I have no idea, and it’s been a source of errors for me several times…even Jamie isn’t sure why he chose tabs as the delimiter, but that’s the way it is. Each line is plain text, plus numbered match variables, plus an optional snippet of Perl that will be executed via eval if it exists. This makes for an extremely flexible and powerful tool, if a wee bit intimidating at first glance.
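To make the eval part concrete, here’s a toy demonstration with a simplified stand-in for the Fedora expression above (the real field shells out to cat to read the release file, which I’ve skipped here):

```perl
# Toy demonstration of eval-based OS matching. The expression is a
# simplified stand-in for the real last field of an os_list.txt line.
my $etc_issue = "Fedora release 7 (Moonshine)\n";
my $expr = '$etc_issue =~ /Fedora.*?([0-9\.]+)/i';
if (eval $expr) {
    # $1 was captured by the regex inside the eval'd string
    print "Detected version $1\n";  # prints "Detected version 7"
    }
```

The numbered match variables set inside the eval’d expression stay available afterward, which is exactly what lets the “Fedora $1” and “$1” fields in the line get resolved to real values.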

So, that last field is the tricky bit. It’s the thing I’m going to have to contend with if I want to be able to test every OS that Webmin supports, rather than just the one that happens to be sitting under the code while the tests are running. For starters, I’ll need a new argument to our oschooser function, called $issue, which will generically contain whatever it is that os_list.txt uses to recognize a particular OS. On my Fedora 7 desktop system, that’s /etc/redhat-release, which contains:

Fedora release 7 (Moonshine)

So, oschooser now contains:

sub oschooser {
my ($oslist, $out, $auto, $issue) = @_;
...
}

Next, we need to make sure we keep the provided $issue if we got it, so we change this:

  # Try to guess the OS name and version
  if (-r "/etc/.issue") {
    $etc_issue = `cat /etc/.issue`;
    }
  elsif (-r "/etc/issue") {
    $etc_issue = `cat /etc/issue`;
    }
  $uname = `uname -a`;

Into:

# Try to guess the OS name and version
my $etc_issue;
if ($issue) {
  $etc_issue = `cat $issue`;
  $uname = $etc_issue; # Strangely, I think this will work fine.
  }
elsif (-r "/etc/.issue") {
  $etc_issue = `cat /etc/.issue`;
  }
elsif (-r "/etc/issue") {
  $etc_issue = `cat /etc/issue`;
  }

Note that $uname is defined earlier in the code now…and merely gets overwritten if we’ve set the $issue variable in our function call.

And then we have to do something about the contents of the last field in os_list.txt before it gets evaluated. This is where it gets a little hairy. In the foreach that iterates through each line in the file testing whether we have a match or not, I’ve added a new first condition, so it now looks like:

foreach my $o (@list) {
  if ($issue && $o->[4]) {
    $o->[4] =~ s#cat [/a-zA-Z\-]*#cat $issue#g;
    } # Testable, but this regex substitution is dumb. XXX
  if ($o->[4] && eval "$o->[4]") {
    # Got a match! Resolve the versions
    $ver_ref = $o;
    if ($ver_ref->[1] =~ /\$/) {
      $ver_ref->[1] = eval "($o->[4]); $ver_ref->[1]";
      }
    if ($ver_ref->[3] =~ /\$/) {
      $ver_ref->[3] = eval "($o->[4]); $ver_ref->[3]";
      }
    last;
    }
  if ($@) {
    print STDERR "Error parsing $o->[4]\n";
    }
  }
  return $ver_ref;
}

Which performs a substitution on the last field, if it contains a cat command, replacing the file being read with the issue file we’ve provided in the $issue variable. Thus, we can now pass in t/fedora-7.issue containing a copy of the /etc/redhat-release contents mentioned above, and we’ll be able to test detection of Fedora 7, no matter what operating system the test is actually running on. I suspect we may run into trouble when we expand our os_list.txt to the full Webmin list, since I’m working with just the limited subset of systems the Virtualmin installer supports (or that I might support in the next year or so). I’ve made a comment in the code with XXX (merely a convention used in the Webmin codebase, though any odd sequence of characters that you’ll remember works fine…many folks use FIXME) to remind myself of this suspicion, so that if I do run into problems later, this is the first place I’ll look.
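To see exactly what that substitution does, here’s the Fedora expression from earlier run through it in isolation:

```perl
# Isolated demonstration of the s### fixup applied to a pattern line's
# last field. $expr is the Fedora expression from os_list.txt.
my $issue = "t/fedora-7.issue";
my $expr  = '`cat /etc/fedora-release 2>/dev/null` =~ /Fedora.*\s([0-9\.]+)\s/i';
$expr =~ s#cat [/a-zA-Z\-]*#cat $issue#g;
print "$expr\n";
# Now: `cat t/fedora-7.issue 2>/dev/null` =~ /Fedora.*\s([0-9\.]+)\s/i
```

Note that the character class contains no digits, which is part of why that XXX comment is there: a release file with a digit anywhere in its path wouldn’t be fully replaced.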

After these changes, it’s possible to get serious about testing. So, I’ve added tests for a couple dozen systems, which was more Googling than coding due to the data-driven nature of my tests, and confirmed the new code is behaving identically to the old. Which means it’s time for…

More Refactoring

If you’ve been following along, you know that oschooser is still awfully long. A good tactic in such situations is to look for bits of functionality that can be pushed down into their own subroutines. One good choice is the parsing of the patterns file at the very beginning of the function:

my @list;
my @names;
my %donename;
open(OS, $oslist) || die "failed to open $oslist : $!";
while(<OS>) {
  chop;
  if (/^([^\t]+)\t+([^\t]+)\t+([^\t]+)\t+([^\t]+)\t*(.*)$/) {
    push(@list, [ $1, $2, $3, $4, $5 ]);
    push(@names, $1) if (!$donename{$1}++);
    $names_to_real{$1} ||= $3;
    }
  }
close(OS);

This is a good place to start, because it only depends on one variable from outside the block, $oslist, which is the name of the OS definitions file. And, of course, file access is always a good candidate for abstraction…what if, some day, we want to pull these definitions from a database or a __DATA__ section? Having it all in one obvious location might be a win. For now, I just want that bloody long oschooser function to be a little bit shorter, so we’ll create this parse_patterns function:

sub parse_patterns {
my ($oslist) = @_;
my @list;
my @names;
my %donename;
# Parse the patterns file
open(OS, $oslist) || die "failed to open $oslist : $!";
while(<OS>) {
  chop;
  if (/^([^\t]+)\t+([^\t]+)\t+([^\t]+)\t+([^\t]+)\t*(.*)$/) {
    push(@list, [ $1, $2, $3, $4, $5 ]);
    push(@names, $1) if (!$donename{$1}++);
    $NAMES_TO_REAL{$1} ||= $3;
    }
  }
close(OS);
return (\@list, \@names);
}

That’s not too bad, and it shaves about 13 lines off of oschooser at a cost of 3 or 4 more lines of function baggage in the whole file. The biggest irritant might be that I’m now passing around two array refs (one of which points to an array that itself contains array references, so now we’ve got a reference to an array of references). I get confused when I use too many references, because I’m addle-brained that way, but these are only a little bit nested and so not too complicated, so I think future readers of the code should be fine. At least, no worse off than they were before I got ahold of this script.
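If you share my reference-induced confusion, the shape of what parse_patterns hands back is just this (with a made-up one-line list for illustration):

```perl
# Illustration of the return values' shape: $list_ref is a reference to
# an array of array refs (one per os_list.txt line), and $names_ref is
# a reference to a flat array of unique OS names. The data is made up.
my $list_ref  = [ [ 'Fedora Linux', '"Fedora $1"', 'fedora', '$1', '' ] ];
my $names_ref = [ 'Fedora Linux' ];

my @list = @$list_ref;         # dereference the outer array
print $list[0][2], "\n";       # prints "fedora"; inner refs deref automatically
```

Perl’s arrow-free multi-subscript syntax ($list[0][2]) hides the inner dereference, which is why the nesting mostly stays out of the way in practice.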

I’ve also converted %names_to_real to %NAMES_TO_REAL as it has become a package scoped global variable, and it’s considered good form to warn folks when they’ve come upon a global by shouting at them. Of course, I have another global, $uname, which I haven’t renamed to all caps, as one of my mandates for myself on this project is to require no changes to the Webmin os_list.txt. As I write this, I’m beginning to have second thoughts about $uname needing to be a global…so we’ll come back to that later.

Capturing the results of parse_patterns and dumping them out into @names and @list lets us run our tests again.

And More Refactoring Still

Things have improved a little in oschooser. It almost fits into two screenfuls on my 20″ monitor. But I think we can do better. I’m aiming for one page or less per function, in this exercise, so we’ve gotta keep moving. The next distinct piece of functionality I see is the automatic OS detection code, so I’ll add a new auto_detect function, something like this:

sub auto_detect {
my ($oslist, $issue, $list_ref) = @_;
my $ver_ref;
my @list = @$list_ref;
my $uname = `uname -a`;
 
# Try to guess the OS name and version
my $etc_issue;
 
if ($issue) {
  $etc_issue = `cat $issue`;
  $uname = $etc_issue; # Strangely, I think this will work fine.
  }
elsif (-r "/etc/.issue") {
  $etc_issue = `cat /etc/.issue`;
  }
elsif (-r "/etc/issue") {
  $etc_issue = `cat /etc/issue`;
  }
 
foreach my $o (@list) {
  if ($issue && $o->[4]) {
    $o->[4] =~ s#cat [/a-zA-Z\-]*#cat $issue#g;
    } # Testable, but this regex substitution is dumb. XXX
  if ($o->[4] && eval "$o->[4]") {
    # Got a match! Resolve the versions
    $ver_ref = $o;
    if ($ver_ref->[1] =~ /\$/) {
      $ver_ref->[1] = eval "($o->[4]); $ver_ref->[1]";
      }
    if ($ver_ref->[3] =~ /\$/) {
      $ver_ref->[3] = eval "($o->[4]); $ver_ref->[3]";
      }
    last;
    }
  if ($@) {
    print STDERR "Error parsing $o->[4]\n";
    }
  }
  return $ver_ref;
}

You may note that I did rethink the globalization of $uname and found that it fit comfortably into this block, so I’ve killed a global introduced earlier in this process. Now I’m not even sure why I thought I needed it somewhere else, which is a nice thing about refactoring: you realize how little you understood what was going on when you first looked at the code. Here’s also where we make use of the @list built during parse_patterns; I dereference it before using it, though that’s probably more verbosity than needed. I could also access it directly through the ref with @{$list_ref}.

Finally, I’m returning $ver_ref, which contains a reference to an array of fields that describes the operating system detected. Now that I’ve split this out, I realize that this OS version array could quite easily be mapped into a hash and turned into an object rather trivially, but that’s an exercise for another day. For now, I just want to feel confident that I’ve made a functionally identical clone of oschooser.pl that I can use and extend painlessly and without fear of breakage. So, let’s keep going.
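As a quick illustration of that hash-mapping idea (the key names here are my invention, not anything the code currently uses):

```perl
# Hypothetical mapping of the four-field version array into a hash;
# the key names are invented for illustration.
my $ver_ref = [ 'Fedora Linux', 'Fedora 7', 'fedora', '7' ];
my %os;
@os{qw(os_type os_version real_os_type real_os_version)} = @$ver_ref;
print "$os{real_os_type} $os{real_os_version}\n";  # prints "fedora 7"
```

A hash slice like this is a one-line conversion, and blessing the hash into an object later would be nearly as easy, which is why I’m not in any hurry to do it now.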

A Few More New Functions, and Killing Unused Code Softly

As with auto_detect, there is a big chunk of code that is used specifically for asking the user to choose the operating system and version from a list of options. This is triggered when $auto is 0 (or any other false value), or when $auto is true but auto-detection failed and one of the non-exit auto options was chosen and is viable. So, we can easily break this whole bunch of functionality out into its own function, called ask_user. Like auto_detect, it requires the $list_ref array reference, and it also needs the $names_ref, since it will be interacting with the end user, and they’ll be more comfortable seeing the “real names” of the available operating systems. Also like auto_detect, it returns a $ver_ref which points to the array containing the full description of the OS.

When I got to this function, I noticed a huge block of unused code, which provides support for the dialog command on systems that have it (mostly just Red Hat based Linux distributions). dialog is a simple tool for adding attractive ncurses interfaces to shell scripts. I’m not sure why the code is being skipped with an if (0) statement, but I have only two choices for what to do about it, if my goal is to simplify this script and make it more robust: enable it and fix whatever problems it has, possibly making it into its own reusable and independently testable function; or simply remove the code altogether. Webmin and the installer libraries for Virtualmin are both in SVN, so if I decide to remove the code, it won’t be lost forever…I could pull it back in the future. I could even tag the current version with “pre-dialog-removal” before stripping it out. After consulting with Jamie, the last option is the one I’ve chosen. So, we can kill not just that code, but also the has_command function, since it is only used in that part of the code. Big win!

So, I’ll make a tagged copy before ripping stuff out:

svn cp lib tags/lib/pre-dialog-removal

Now I know I can always go back and refer to that code if I want to. It’s not really particularly precious, but it’s a good practice to get into, since copies in Subversion are cheap and fast (likewise for git, and most other modern distributed revision control systems), and I never know when I might want to go back and see how something was done before. I’ll do the same in the Webmin tree before I make the changes needed to merge the new OsChooser.pm in place of the old oschooser.pl.

So, after killing the dialog pieces of the code, and converting the user interaction to its own function, we have:

# ask for the operating system name ourselves
sub ask_user {
my ($names_ref, $list_ref) = @_;
my @names = @$names_ref;
my @list = @$list_ref;
my $vnum;
my $osnum;
my $dashes = "-" x 75;
print <<EOF;
For Webmin to work properly, it needs to know which operating system
type and version you are running. Please select your system type by
entering the number next to it from the list below
$dashes
EOF
{
my $i;
for($i=0; $i<@names; $i++) {
  printf " %2d) %-20.20s ", $i+1, $names[$i];
  print "\n" if ($i%3 == 2);
  }
print "\n" if ($i%3);
}
print $dashes,"\n";
print "Operating system: ";
chop($osnum = <STDIN>);
if ($osnum !~ /^\d+$/) {
  print "ERROR: You must enter the number next to your operating\n";
  print "system, not its name or version number.\n\n";
  exit 9;
  }
if ($osnum < 1 || $osnum > @names) {
  print "ERROR: $osnum is not a valid operating system number.\n\n";
  exit 10;
  }
print "\n";
 
# Ask for the operating system version
my $name = $names[$osnum-1];
print <<EOF;
Please enter the version of $name you are running
EOF
print "Version: ";
chop($vnum = <STDIN>);
if ($vnum !~ /^\S+$/) {
  print "ERROR: An operating system version cannot contain\n";
  print "spaces. It must be like 2.1 or ES4.0.\n\n";
  exit 10;
  }
print "\n";
return [ $name, $vnum,
    $NAMES_TO_REAL{$name}, $vnum ];
}

Not too bad. It just fits into one 50 row editor window without scrolling, so that’s a small enough bite for me. We make use of the %NAMES_TO_REAL global in this function, to convert from the short names to the longer human-friendly names, and I’m beginning to get a vague feeling something could be done to encapsulate that functionality, even without making this an Object Oriented library (which seems like overkill for such a simple program), so I’ll probably be coming back to that global in a later post (and I thought I would have a hard time getting two full posts worth out of this exercise!).

Wrapping Up

I’m feeling pretty good about the code now. I think it’s more readable than before I started messing with it, it’s certainly shorter due to bits of refactoring and some removal of dead or redundant code, and it’s got quite a few tests. All of its variables are reasonably scoped to the areas where they are used, except for %NAMES_TO_REAL, which is a file-scoped my variable (it turns out a string eval picks up the lexical scope of its containing block, so it doesn’t need to be an our variable as I’d first assumed).
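That eval scoping behavior can be shown in miniature. The names below are made up for illustration, but the mechanism is the same: a string eval compiles in the lexical scope surrounding it, so a file-scoped my variable is visible to it without any our declaration.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# A file-scoped lexical, standing in for %NAMES_TO_REAL
my %NAMES_TO_REAL = ( 'debian' => 'Debian Linux', 'freebsd' => 'FreeBSD' );

sub real_name {
    my ($short) = @_;
    # The string eval is compiled in the lexical scope of its containing
    # block, so it sees both $short and the file-scoped hash
    return eval '$NAMES_TO_REAL{$short}';
}

print real_name('debian'), "\n";
```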

The various utility functions aren’t very useful to outsiders and may change…the only function I really want to be public is oschooser, so I can see several opportunities for further enhancements, like encapsulating the rest into private methods within an OsChooser object. But that’ll be a project for another day. You can see the current code, plus an example os_list.txt. Next time, I’ll begin work on wrapping this up for CPAN, and releasing a large OS definition list (Webmin’s list is incredibly long and detects hundreds of systems and versions, but needs a bit of massaging to be generically useful, due to its own internal version requirements).

Who knew one simple script could present so much interesting work? It’s been so much fun, I think I’ll start a Perl Neighborhood Watch and do this to every little Perl script I come across. Who’s with me!? (Or maybe I should just focus on our own code for a little while longer, since we’ve got quite a few nooks and crannies that haven’t seen any attention in years. Perl makes us lazy with its peskily perfect backward compatibility.)

Old School to New School: Refactoring Perl

At YAPC::NA I sat in on lots of great talks (I also won Randal Schwartz in the charity auction, and so got to be beaten soundly at pool by him, and learn a few things about Smalltalk and Seaside). In particular, Michael Schwern gave a fantastic talk entitled Skimmable Code: Fast to Read, Fast to Change. This got me thinking about our own code. Webmin is an old codebase, approaching 11 years old, and thus has some pretty old school Perl practices throughout. Coding standards tend to stick to projects over the years, and as new code comes in, it tends to look like the old code. Adding to that momentum, Jamie has religiously kept compatibility for module authors throughout the entire life of the project. Modules written ten years ago can, astonishingly, be expected to work identically in today’s Webmin, though they might not participate in logging or advanced ACLs or other nifty features that have come to exist in the framework in that time.

So, when I found myself needing to make a modification to oschooser.pl, a small program for detecting the operating system on which Webmin is running (sounds trivial, but when you realize that Webmin runs on hundreds of operating systems and versions, it turns out to be a rather complex problem), I decided to take the opportunity to put into practice some of the niceties of modern Perl. This article is a little different than what I usually write for In the Box, in that it covers a lot of ground fast, and most of it is probably pretty mundane stuff for folks already writing modern Perl. But, I think there’s enough old Perl code running around out there, running the Internet and such, that it’s worth talking about modernization work.

So, let’s go spelunking!

Introduction to oschooser.pl

The code we’ll be picking apart, and putting back together, is probably one of the more heavily used pieces of Perl code, and certainly one of the oldest, in the wild. It’s the OS detection code that Webmin and Usermin use to figure out what system they’re running on during installation. With Webmin having 12 million (give or take several million) downloads over its ten year history, this equals a lot of operating systems successfully detected. Perhaps I should have picked something a little less important for my first stab at modernization, but I’ve rarely been accused of being smart about making sweeping changes! (Jamie will reel me in, before I break actual Webmin code. I manage to break Virtualmin every now and then…but he’s more suspicious when I check code into Webmin, since it happens quite rarely.)

The oschooser.pl program loads up a rather complex definitions file called os_list.txt (by default, though it’s configurable, and we use different lists for Virtualmin and Webmin, since they have different requirements for version identification). The definitions file can contain snippets of Perl code, which are executed via eval when appropriate. Most of the updates to OS detection over the years have happened in os_list.txt, so oschooser.pl itself hasn’t seen much grooming, which makes it a prime candidate for modernization. Assuming, of course, that it works identically when I’m done with it.

Where to start?

My end goal with this project is to make oschooser.pl usable as a library from Perl programs, since our new product installer is written in Perl rather than POSIX shell. I also figured it’d be nice to make it testable, since I’ve made several mistakes in the detection code (in os_list.txt, specifically) over the past few years that led to our product being uninstallable on some systems until the bug was tracked down. But, first things first. Almost nothing in Webmin is strict compatible, and even warnings can cause some complaints, so that seems like a good starting point.

The code we’re starting with can be found here, so you can follow along at home.

Enabling warnings reveals the following (don’t worry about the arguments for now):

$ perl -w oschooser.pl os_list.txt outfile 1
Name "main::uname" used only once: possible typo at oschooser.pl line 31.
Name "main::donename" used only once: possible typo at oschooser.pl line 17.

Not too bad, actually. Just a couple of variables that are only seen once, easy enough to fix by giving them a my declaration. In this case, though, enabling warnings also turns up some unused code. While donename is genuinely doing work (it keeps track of the names we’ve seen so far, one of several idiomatic ways to build an array of unique values), the uname variable serves no purpose. So I’m going to kill that whole line rather than declare it.
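For reference, the idiom donename is an instance of looks something like this (a generic sketch with made-up data, not the script’s actual code): a “seen” hash guards the push, so each value lands in the output array only once.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# The %done hash idiom: push a value only the first time it is seen
my @oses = qw(debian redhat debian suse redhat);
my (%donename, @names);
foreach my $os (@oses) {
    # $donename{$os}++ is false the first time, true every time after
    push(@names, $os) unless $donename{$os}++;
}

print "@names\n";
```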

Next up in our “low-hanging fruit” exercise is enabling use strict. Turns out this is quite a lot more intimidating:

$ perl -c oschooser.pl
Global symbol "$oslist" requires explicit package name at oschooser.pl line 15.
Global symbol "$out" requires explicit package name at oschooser.pl line 15.
Global symbol "$auto" requires explicit package name at oschooser.pl line 15.
Global symbol "$oslist" requires explicit package name at oschooser.pl line 16.
Global symbol "$oslist" requires explicit package name at oschooser.pl line 16.
Global symbol "@list" requires explicit package name at oschooser.pl line 20.
Global symbol "@names" requires explicit package name at oschooser.pl line 21.
Global symbol "%names_to_real" requires explicit package name at oschooser.pl line 22.
Global symbol "$auto" requires explicit package name at oschooser.pl line 27.
Global symbol "$etc_issue" requires explicit package name at oschooser.pl line 30.
Global symbol "$etc_issue" requires explicit package name at oschooser.pl line 33.
Global symbol "$o" requires explicit package name at oschooser.pl line 36.
Global symbol "@list" requires explicit package name at oschooser.pl line 36.
Global symbol "$o" requires explicit package name at oschooser.pl line 37.
Global symbol "$o" requires explicit package name at oschooser.pl line 37.
Global symbol "$ver" requires explicit package name at oschooser.pl line 39.
Global symbol "$o" requires explicit package name at oschooser.pl line 39.
Global symbol "$ver" requires explicit package name at oschooser.pl line 40.
Global symbol "$ver" requires explicit package name at oschooser.pl line 41.
Global symbol "$o" requires explicit package name at oschooser.pl line 41.
Global symbol "$ver" requires explicit package name at oschooser.pl line 41.
Global symbol "$ver" requires explicit package name at oschooser.pl line 43.
Global symbol "$ver" requires explicit package name at oschooser.pl line 44.
Global symbol "$o" requires explicit package name at oschooser.pl line 44.
Global symbol "$ver" requires explicit package name at oschooser.pl line 44.
Global symbol "$o" requires explicit package name at oschooser.pl line 49.
Global symbol "$ver" requires explicit package name at oschooser.pl line 53.
Global symbol "$auto" requires explicit package name at oschooser.pl line 54.
Global symbol "$auto" requires explicit package name at oschooser.pl line 59.
Global symbol "$rv" requires explicit package name at oschooser.pl line 61.
Global symbol "$auto" requires explicit package name at oschooser.pl line 67.
Global symbol "$auto" requires explicit package name at oschooser.pl line 72.
Global symbol "$auto" requires explicit package name at oschooser.pl line 77.
Global symbol "$cmd" requires explicit package name at oschooser.pl line 80.
Global symbol "$i" requires explicit package name at oschooser.pl line 81.
Global symbol "$i" requires explicit package name at oschooser.pl line 81.
Global symbol "@names" requires explicit package name at oschooser.pl line 81.
Global symbol "$i" requires explicit package name at oschooser.pl line 81.
Global symbol "$cmd" requires explicit package name at oschooser.pl line 82.
Global symbol "$i" requires explicit package name at oschooser.pl line 82.
Global symbol "@names" requires explicit package name at oschooser.pl line 82.
Global symbol "$i" requires explicit package name at oschooser.pl line 82.
Global symbol "$tmp_base" requires explicit package name at oschooser.pl line 84.
Global symbol "$temp" requires explicit package name at oschooser.pl line 85.
Global symbol "$tmp_base" requires explicit package name at oschooser.pl line 85.
Global symbol "$cmd" requires explicit package name at oschooser.pl line 86.
Global symbol "$temp" requires explicit package name at oschooser.pl line 86.
Global symbol "$osnum" requires explicit package name at oschooser.pl line 87.
Global symbol "$temp" requires explicit package name at oschooser.pl line 87.
Global symbol "$osnum" requires explicit package name at oschooser.pl line 88.
Global symbol "$osnum" requires explicit package name at oschooser.pl line 88.
Global symbol "$osnum" requires explicit package name at oschooser.pl line 89.
Global symbol "$name" requires explicit package name at oschooser.pl line 96.
Global symbol "@names" requires explicit package name at oschooser.pl line 96.
Global symbol "$osnum" requires explicit package name at oschooser.pl line 96.
Global symbol "@vers" requires explicit package name at oschooser.pl line 97.
Global symbol "$name" requires explicit package name at oschooser.pl line 97.
Global symbol "@list" requires explicit package name at oschooser.pl line 97.
Global symbol "$cmd" requires explicit package name at oschooser.pl line 98.
Global symbol "$i" requires explicit package name at oschooser.pl line 99.
Global symbol "$i" requires explicit package name at oschooser.pl line 99.
Global symbol "@vers" requires explicit package name at oschooser.pl line 99.
Global symbol "$i" requires explicit package name at oschooser.pl line 99.
Global symbol "$cmd" requires explicit package name at oschooser.pl line 100.
Global symbol "$i" requires explicit package name at oschooser.pl line 100.
Global symbol "$name" requires explicit package name at oschooser.pl line 100.
Global symbol "@vers" requires explicit package name at oschooser.pl line 100.
Global symbol "$i" requires explicit package name at oschooser.pl line 100.
Global symbol "$cmd" requires explicit package name at oschooser.pl line 102.
Global symbol "$temp" requires explicit package name at oschooser.pl line 102.
Global symbol "$vnum" requires explicit package name at oschooser.pl line 103.
Global symbol "$temp" requires explicit package name at oschooser.pl line 103.
Global symbol "$vnum" requires explicit package name at oschooser.pl line 104.
Global symbol "$vnum" requires explicit package name at oschooser.pl line 104.
Global symbol "$temp" requires explicit package name at oschooser.pl line 105.
Global symbol "$vnum" requires explicit package name at oschooser.pl line 106.
Global symbol "$ver" requires explicit package name at oschooser.pl line 110.
Global symbol "@vers" requires explicit package name at oschooser.pl line 110.
Global symbol "$vnum" requires explicit package name at oschooser.pl line 110.
Global symbol "$dashes" requires explicit package name at oschooser.pl line 114.
Global symbol "$dashes" requires explicit package name at oschooser.pl line 115.
Global symbol "$i" requires explicit package name at oschooser.pl line 121.
Global symbol "$i" requires explicit package name at oschooser.pl line 121.
Global symbol "@names" requires explicit package name at oschooser.pl line 121.
Global symbol "$i" requires explicit package name at oschooser.pl line 121.
Global symbol "$i" requires explicit package name at oschooser.pl line 122.
Global symbol "@names" requires explicit package name at oschooser.pl line 122.
Global symbol "$i" requires explicit package name at oschooser.pl line 122.
Global symbol "$i" requires explicit package name at oschooser.pl line 123.
Global symbol "$i" requires explicit package name at oschooser.pl line 125.
Global symbol "$dashes" requires explicit package name at oschooser.pl line 126.
Global symbol "$osnum" requires explicit package name at oschooser.pl line 128.
Global symbol "$osnum" requires explicit package name at oschooser.pl line 129.
Global symbol "$osnum" requires explicit package name at oschooser.pl line 134.
Global symbol "$osnum" requires explicit package name at oschooser.pl line 134.
Global symbol "@names" requires explicit package name at oschooser.pl line 134.
Global symbol "$osnum" requires explicit package name at oschooser.pl line 135.
Global symbol "$name" requires explicit package name at oschooser.pl line 141.
Global symbol "@names" requires explicit package name at oschooser.pl line 141.
Global symbol "$osnum" requires explicit package name at oschooser.pl line 141.
Global symbol "$name" requires explicit package name at oschooser.pl line 142.
Global symbol "$vnum" requires explicit package name at oschooser.pl line 146.
Global symbol "$vnum" requires explicit package name at oschooser.pl line 147.
Global symbol "$ver" requires explicit package name at oschooser.pl line 153.
Global symbol "$name" requires explicit package name at oschooser.pl line 153.
Global symbol "$vnum" requires explicit package name at oschooser.pl line 153.
Global symbol "%names_to_real" requires explicit package name at oschooser.pl line 154.
Global symbol "$name" requires explicit package name at oschooser.pl line 154.
Global symbol "$vnum" requires explicit package name at oschooser.pl line 154.
Global symbol "$out" requires explicit package name at oschooser.pl line 159.
Global symbol "$ver" requires explicit package name at oschooser.pl line 160.
Global symbol "$ver" requires explicit package name at oschooser.pl line 161.
Global symbol "$ver" requires explicit package name at oschooser.pl line 162.
Global symbol "$ver" requires explicit package name at oschooser.pl line 163.
Global symbol "$d" requires explicit package name at oschooser.pl line 170.
Global symbol "$rv" requires explicit package name at oschooser.pl line 172.
Global symbol "$rv" requires explicit package name at oschooser.pl line 174.
Global symbol "$d" requires explicit package name at oschooser.pl line 177.
Global symbol "$d" requires explicit package name at oschooser.pl line 178.
Global symbol "$rv" requires explicit package name at oschooser.pl line 178.
Global symbol "$d" requires explicit package name at oschooser.pl line 178.
Global symbol "$rv" requires explicit package name at oschooser.pl line 181.
oschooser.pl had compilation errors.

Wow! I think that might be more lines than the program itself. Luckily, it’s almost entirely unscoped variables. A quick pass over the code, adding my declarations to the obvious candidates, gets things looking a little better. One tricky bit is the $i loop variables used in for loops. We don’t want those to be declared several times in the code, and we don’t want them to leak out into the scope of the rest of the program. In modern Perl, this is no problem, as you can use the following:

    for(my $i=0; $i<@names; $i++) {
      $cmd .= " ".($i+1)." '$names[$i]'";
      }

And $i will be local to the for loop. I momentarily feared that I’d need to use an outer block to accomplish this, as Webmin needs to be compatible with quite old Perl versions (5.005 for core Webmin, unless Unicode support is needed, in which case 5.8.1 is required), but after downloading and installing Perl 5.005_4, I found that was an unnecessary precaution. The foreach loops can also make use of this convenient feature. If you do happen to be stuck with an even more ancient version than 5.005 (but still higher than 4)–though I can’t imagine how you could be, as 5.005 is over nine years old–you can use the following:

  {
  my $i;
    for($i=0; $i<@names; $i++) {
      $cmd .= " ".($i+1)." '$names[$i]'";
      }
  }

Which provides similar private scope for the $i variable, at the cost of three extra lines.

Making it Testable

So far, I haven’t made any changes that are likely to break the code. It’s merely been cleanup and syntax tweaks. But, to accomplish everything I’d like in this exercise, we’ll be doing some refactoring and refining of the code. To do that with confidence, it’d be nice to have some tests to ensure the code works the same before and after any changes.

Since this is not historically a library, it’s not particularly easy to test. One could write a custom test harness, or use Test::Command, and test its behavior as a whole, but since it’s written in Perl and one of my goals is to make it useful as a library from Perl scripts, I decided instead to make it loadable as a module and use Test::More. A trick that’s very common in the Python world, but doesn’t seem as well-known amongst Perl mongers, is a main function that is called only when the script is executed directly, rather than loaded via use or require. The main function then calls whatever the script would normally do, optionally setting up variables or parsing command line arguments.

So, I added the following near the beginning of the file:

# main
sub main() {
if ($#ARGV < 1) { die "Usage: $0 os_list.txt outfile [0|1|2|3]\n"; }
my ($oslist, $out, $auto) = @ARGV;
oschooser($oslist, $out, $auto);
}
main() unless caller();  # make it testable and usable as a library

I also took this opportunity to add a simple usage message if the command is executed with fewer than two arguments (@ARGV, like all Perl arrays, starts counting at 0). I also needed to wrap the main logic of the script in a sub block, so that the script doesn’t do anything immediately when loaded as a library.

Make it a Module

Since I want to use this code as a library, I face a choice. The use statement is functionally equivalent to:

BEGIN { require Module; Module->import( LIST ); }

Which means, I suppose, I could keep the name oschooser.pl and use:

require 'oschooser.pl';

We don’t need BEGIN level assurance, since we have no prototypes in this library and only use simple subroutines. But, I find this a bit unsatisfying, since it’s no longer in common use amongst Perl developers, and use provides the ability to export functions explicitly. Test::More has both a use_ok and a require_ok function, so it’s irrelevant from a testing perspective. It’ll probably remain oschooser.pl in Webmin proper, and OsChooser.pm in my Virtualmin installer library, at least for the foreseeable future. Not really a lot of difference between the two.
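Here’s a rough single-file sketch of the shape the module takes with use-style explicit exporting. This is my illustration, not the actual OsChooser.pm: the real module lives in its own file, and oschooser is stubbed out here so the example stands alone.

```perl
#!/usr/bin/perl
use strict;
use warnings;

package OsChooser;
# Borrow Exporter's import method, so callers can request functions by name
use Exporter 'import';
our @EXPORT_OK = qw(oschooser);

sub oschooser {
    my ($oslist, $out, $auto) = @_;
    return "stub";    # the real function does the detection work
}

package main;
# In a separate file this would be: use OsChooser qw(oschooser);
OsChooser->import('oschooser');
print oschooser('os_list.txt', 'outfile', 1), "\n";
```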

Some Tests

So, now that we can call the library roughly the way we want, using use, it’s time to write a few tests to be sure things actually work after we begin making more sweeping changes.

We can start with simple compile tests (I usually call these types of tests t/return.t, as they just check to be sure the module returns without error on load and the functions within return the data type that is expected):

#!/usr/bin/perl -w
# These tests just check to be sure all functions return something
# It doesn't care what it is returned...so garbage can still pass,
# as long as the garbage is the right data type.
 
use strict;
use Test::More qw(no_plan);
 
use_ok( 'OsChooser' );
 
isa_ok(\OsChooser::have_tty(), 'SCALAR');
isa_ok(\OsChooser::has_command("cp"), 'SCALAR');

Hmm…OK, so we don’t actually have a lot to test yet, just a couple of utility functions (and I’ve even cheated a little and looked ahead to where I introduced a have_tty function, or this would be an even shorter set of tests). The most important function, oschooser, doesn’t know how to return anything very useful yet. It can only write out its findings to a file. But, since we’re always going to be creating that file, regardless of how nice the module usage becomes, we need to figure out how to test it anyway.

Unsurprisingly, there is already a full-featured module on CPAN for testing the contents of files, called, unlikely though it may seem, Test::Files. So, we’ll just grab that:

$ sudo perl -MCPAN -e shell
 
cpan shell -- CPAN exploration and modules installation (v1.7602)
ReadLine support enabled
 
cpan> install Test::Files
...

And then create as many Operating System definition files as we want in the t directory. We’ll just name them for the OS they represent. This is the kind of testing I love, because the actual test file will be extremely simple, no matter how many operating systems I want to test on:

#!/usr/bin/perl -w
use strict;
use OsChooser;
 
# Get a list of the example OS definition files
opendir(DIR, "t/") || die "can't opendir t/ $!";
my @files = grep { /\.os$/ } readdir(DIR);
closedir DIR;
use Test::More qw(no_plan);
use Test::Files;
 
foreach my $file (@files) {
  $file =~ /(.*)\.os$/;
  my $osname = $1;
  my $outfile = "t/outfile";
  OsChooser::oschooser("os_list.txt", $outfile, 1);
  compare_ok("t/$file", $outfile, $osname);
 
  # Cleanup
  unlink $outfile;
}

I love data-driven software, and this is a fun little example of it. We can run as many tests as we want, merely by adding more OS data files–one with the “os” suffix to provide what should be output by oschooser and one to contain the file that oschooser would normally use to identify the OS (/etc/issue, among others), which isn’t yet supported, but I’ll talk about it in the next post. Speaking of being data-driven, I think it’d also be pretty nifty to get the test count from the @files array, rather than using no_plan, but because modules loaded with use are loaded early during compile time (in a BEGIN block, effectively) we don’t actually have anything in @files yet.
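The compile-time versus runtime ordering that bites us here can be seen in miniature. Nothing Test::More-specific below, just the phase ordering: a BEGIN block (which is effectively what a use statement runs in) executes before any ordinary runtime statement.

```perl
#!/usr/bin/perl
use strict;
use warnings;

our @phases;
BEGIN { push @phases, 'compile' }   # a `use Module LIST` runs at this stage
push @phases, 'runtime';            # readdir and @files population happen here

# The BEGIN block ran during compilation, before any runtime statement,
# which is why a use-time test plan can't see a runtime-built @files list
print "@phases\n";
```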

However, as mentioned, the oschooser function doesn’t yet allow one to specify the issue file to look at, so no matter how many definitions I provide, it’ll never be able to test anything but the OS the test is running on. Oh, well, for now we’ll just create one OS definition file that matches my current OS, and make it a priority to make the function more testable somehow, possibly via an optional parameter to oschooser.

Alright, so now that we have some rudimentary tests in place, we can break stuff with confidence! We’ll come back to testing again in the near future, since we’re leaving so much untested right now.

Plain Old Documentation

I’m going to take a quick detour now that we’ve got some basic tests in place. Testing is one practice that most developers agree makes for great code, and the other practice that most folks can agree on is documentation.

Since this is such a simple piece of code, and was intended exclusively for use during installation of Webmin and Usermin, Jamie never really documented it. Now that I’m forcing it to be useful in other locations, and having some fun giving it a modern Perl face lift, it’s as good a time as any to add some documentation. POD isn’t the only documentation format usable within Perl code, but it is, by far, the most popular, and it has lots of great tools for processing and testing coverage, so that’s what Jamie recently chose for use in documenting the Virtualmin API. It’s also easy to learn, and results in text that is pretty readable even before processing.

I’m not sure of the recommended practices for documenting scripts that work on both the command line and as a module, but here’s what I came up with:

=head1 OsChooser.pm
 
Attempt to detect operating system and version, or ask the user to select
from a list.  Works from the command line, for usage from shell scripts,
or as a library for use within Perl scripts.
 
=head2 COMMAND LINE USE
 
OsChooser.pm os_list.txt outfile [auto]
 
Where "auto" can be the following values:
 
=over 4
 
=item 0
 
always ask user
 
=item 1
 
automatic, give up if fails
 
=item 2
 
automatic, ask user if fails
 
=item 3
 
automatic, ask user if fails and if a TTY
 
=back
 
=head2 SYNOPSIS
 
    use OsChooser;
    my ($os_type, $version, $real_os_type, $real_os_version) =
       OsChooser->oschooser("os_list.txt", "outfile", $auto, [$issue]);
 
=cut

Pretty simple, but covers the basics.

Next Time

Unfortunately, the code is now longer and probably a little less readable than before! But it is more robust in the face of changes, since it now has reasonably scoped variables. And it’s more friendly to others who might want to use it, due to the new documentation and the ability to use it as a library from Perl or as a command from shell scripts.

Next time we’ll start in on the refactoring, and we’ll also write some more tests. This is turning into a real challenge, due to the data-driven nature of the script, and the fact that it’s somewhat hardcoded to look for OS data in very specific locations. Since, a big part of what I want to test is in the os_list.txt file, we don’t have the luxury of just saying, “It’s configuration…we’ll just make a special version for testing purposes.” We’ll have to get far more clever.

Webmin::API: Using Webmin as a library

Webmin is perhaps the largest bundle of system administration related Perl code in existence (outside of CPAN, of course), much of which is unavailable anywhere else.  I often find myself wishing for a function or two from Webmin in my day-to-day Perl scripting.  Historically, one could use Webmin functions by first pulling in all of the bits and pieces manually, and running a few of the helper functions.  For example, at Virtualmin, Inc. we use this bit of code to start up the configuration stage of our install scripts:

#!/usr/bin/perl
$|=1;
# Setup Webmin environment
$no_acl_check++;
$ENV{'WEBMIN_CONFIG'} ||= "/etc/webmin";
$ENV{'WEBMIN_VAR'} ||= "/var/webmin";
$ENV{'MINISERV_CONFIG'} = $ENV{'WEBMIN_CONFIG'}."/miniserv.conf";
open(CONF, "$ENV{'WEBMIN_CONFIG'}/miniserv.conf") || die "Failed to open miniserv.conf";
while(<CONF>) {
  if (/^root=(.*)/) {
    $root = $1;
    }
  }
close(CONF);
$root ||= "/usr/libexec/webmin";
chdir($root);
require './web-lib.pl';
init_config();

Wow.  That’s a lot of extraneous crap just to make use of Webmin functions.  Not all of that is necessary in every script that wants to use Webmin functions, but it’s always something I have to refer to the documentation for.

So, I’ve been bugging Jamie for some time to make a simpler way to get at the Webmin API, and he’s just released the Webmin::API Perl module.   To use it, you’ll first need Webmin installed.  There’s an RPM, deb, tarball, and Solaris pkg, so it’s easy no matter what UNIX-like OS you run (it’ll also run on Windows, but only in relatively limited fashion), and then you can install it like any other Perl module:

# tar xvzf Webmin-API-1.0.tar.gz
# cd Webmin-API
# perl Makefile.PL
# make install

Once that’s done, you can make use of the entirety of the web-lib.pl, plus the libraries for all of the Webmin modules.  For example, one could access all of the Webmin variables, like %gconfig, as well as all of the web-lib.pl functions, such as ftp_download (pure Perl FTP client), kill_byname (like killall), nice_size (return a number in GB, MB, etc.), running_in_zone (detects whether it's running in a Solaris Zone), etc.

So, making an application that downloads and does something with remote files is trivial, for example.  But, probably more interesting, is that once Webmin::API has been loaded, you can make use of the foreign_require function, which is used to access any available Webmin module function library.

For example, if I wanted to make sure Postfix was configured to use Maildir mail spools, I could do the following:

foreign_require("postfix", "postfix-lib.pl");
postfix::set_current_value("home_mailbox", "Maildir/", 1);
postfix::reload_postfix();

That’s it.  No need to worry about parsing the file and no regex needed.  You don’t need to figure out where the Postfix main.cf is located (assuming Webmin is configured correctly), or what the proper way to restart the service is.

One common, and surprisingly complicated, task is setting up initscripts to start on boot.  It seems like every Linux distribution uses a slightly different directory layout, slightly different scripts, and different tools for managing the rc directories and files.  Webmin knows about the vast majority of those quirks, and provides a uniform interface to all of them, and this functionality is exposed to scripts via the init module.  For example, I could enable Postfix on boot with the following:

foreign_require("init", "init-lib.pl");
init::enable_at_boot("postfix");

There is one unfortunate caveat to this: you have to know the name of the initscript. On all of the systems I work with, this is pretty consistent across most services, with the exception of Apache. On Red Hat based systems the Apache service is called httpd, while on Debian/Ubuntu systems it is apache2. Some systems also call it apache.
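One way around the caveat (this helper and its lookup table are my own invention, not part of Webmin) is to map distro families to the initscript name before calling init::enable_at_boot:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical mapping of distro families to their Apache initscript name
my %apache_initscript = (
    'redhat' => 'httpd',
    'centos' => 'httpd',
    'debian' => 'apache2',
    'ubuntu' => 'apache2',
);

sub apache_script_for {
    my ($family) = @_;
    # Fall back to plain 'apache', which some systems use
    return $apache_initscript{$family} || 'apache';
}

print apache_script_for('ubuntu'), "\n";
```

The resulting name would then be passed to init::enable_at_boot in place of a hardcoded "httpd" or "apache2".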

Working With the Linux Firewall

One of the most powerful Webmin modules is the Linux Firewall module, which manages an iptables firewall.  It is nearly comprehensive, covering many of the advanced stateful capabilities, as well as logging and the creation and management of arbitrary chains.  We can make use of the basic functionality of the module by importing the firewall library.

foreign_require("firewall", "firewall-lib.pl");

Once imported, we have access to the get_iptables_save function, which imports any existing rules from the system default iptables save file into an array.  You can then work with them using standard Perl data management tools like push and splice.

Say you want to open ports 10000 and 20000 (for Webmin and Usermin, of course).  Maybe you also want to make sure ssh (port 22) is available for those times when you need to hit the command line.  The simplest approach is probably to drop them into an array (so you can add new ports later without having to read code):

#!/usr/bin/perl
 
use Webmin::API;
foreign_require("firewall", "firewall-lib.pl");
use warnings;
 
my @tcpports = qw(ssh 10000 20000);
my @tables = &firewall::get_iptables_save();
my ($filter) = grep { $_->{'name'} eq 'filter' } @tables;
if (!$filter) {
  $filter = { 'name' => 'filter',
              'rules' => [ ] };
}
 
foreach ( @tcpports ) {
  print "  Allowing traffic on TCP port: $_\n";
  my $newrule = { 'chain' => 'INPUT',
               'p' => [ [ '', 'tcp' ] ],
               'dport' => [ [ '', $_ ] ],
               'j' => [ [ '', 'ACCEPT' ] ],
             };
  splice(@{$filter->{'rules'}}, 0, 0, $newrule);
}
firewall::save_table($filter);
firewall::apply_configuration();

This reads the existing rules, adds the new ones, saves them out, and applies them. The rules this creates are identical to what you would get if you’d entered the following on the command line on a Red Hat based system:

iptables -I INPUT -p tcp --dport ssh -j ACCEPT
iptables -I INPUT -p tcp --dport 10000 -j ACCEPT
iptables -I INPUT -p tcp --dport 20000 -j ACCEPT
service iptables save

Now, of course, you could do all of that with backticks and substitution, but you’d have to add a bunch of additional logic to figure out whether to use iptables-save, service iptables save, or some variant of the former with an option or two (Debian and Ubuntu have a rather complex set of firewall configuration files, and thus the appropriate iptables save file may not be immediately obvious).  Dealing with things programmatically by hand is also more difficult if you want to do something interesting like “only add a rule if these two other rules already exist, otherwise add the following two rules”.  And reading and parsing the rather complex save file, then writing it back out yourself, can be a challenge (feel free to steal the Webmin code for it, if you prefer not to need all of Webmin).

Known Issues

This Perl module is new, so it’s pretty safe to say there is room for improvement.  The biggest issue is that only the core Webmin web-lib.pl and ui-lib.pl functions are documented, so for the vast majority of the functionality found in Webmin, you’ll have to dig through the relevant modules yourself.  I plan to spend some time adding POD documentation to each of those libraries in the not too distant future, but in the meantime, the best documentation is the source itself.  Luckily, every library has an accompanying working example application in the form of the module that it is part of.

Another issue is that Webmin is full of old code.  It’s a ten-year-old codebase, and much of it isn’t “use strict” or even “use warnings” compliant.  You can, of course, enable warnings after “use Webmin::API”, and it works fine; see my final iptables example for that kind of usage.  Strict is only usable, even after importing Webmin, if you disable many kinds of checks.  This is another issue I’ll spend some time on in the future.

In the meantime, there’s a lot of great functionality that’s just been made a little easier to make use of.  I’ll be writing several more articles with examples of using this API in the near future.  Specifically, the next installment of my series on Analysis and Reporting of System Data  will make use of the Webmin System and Server Status module to build a flexible ping monitoring and reporting tool in just a few lines of code.

One config file to rule them all

Configuration files are a boring necessity in software development. Parsing existing configuration files is a necessary aspect of almost any systems automation task. I regularly need to read and write configuration files from different languages, as I have simple maintenance, startup, and installation scripts written in BASH, larger Webmin-related tools in Perl, and stuff related to our website written in PHP. Of course, there are some great configuration file parsers for Perl in CPAN, but if you need a highly portable script and you don’t want your user to have to know anything about CPAN, it makes sense to build your own.

Luckily, in all three of these languages, plus Ruby and Python (other favorites of mine), simple configuration files can be easy, if you choose the right format.

Start from the Least Common Denominator

The least capable language in this story, at least with regard to data structures, is probably BASH, so we’ll start by creating a configuration file that’s easy to use with BASH. The obvious choice is a file filled with simple variable assignments, like so:

apache.config

# A comment
show_order=0
start_cmd=/etc/rc.d/init.d/httpd start
mime_types=/etc/mime.types
apachectl_path=/usr/sbin/apachectl
stop_cmd=/etc/rc.d/init.d/httpd stop
emptyvalue=
# A blank line too..
 
max_servers=100
test_config=1
apply_cmd=/etc/rc.d/init.d/httpd restart
httpd_path=/usr/sbin/httpd
httpd_dir=/etc/httpd
#  A comment with an=sign

This file is valid BASH syntax; you could run it directly with /bin/sh apache.config and it would return no errors (though it wouldn’t do anything, because the values are not exported, so they are only in scope for the split second it takes BASH to parse the file). Because it’s BASH syntax, empty lines are ignored, and any line that starts with a # is a comment and also ignored. Empty values are also legal, so we need to accommodate lines that have only a key and no value. Also, because this is a valid BASH script, we can make use of these variables in our scripts easily by sourcing this file. In shell scripts this is done using the dot operator ( . ), like so:

. apache.config

After this, each of the values in the apache.config file are accessible by their names. There are some caveats that make this a less than ideal practice for anything more complicated than a small script. The variables pollute the namespace when pulled in this way. So, if you later wanted to use $apachectl_path as a variable for some other purpose, for example, you would overwrite the existing assignment, and cause possibly difficult to diagnose errors. BASH doesn’t have support for complex data structures, so there isn’t much we can do about this, without introducing quite a lot of complexity, so we’ll take our chances and keep our scripts short and simple.

Getting the values into a Perl data structure

While our configuration file is not valid Perl syntax, Perl still has plenty of tools for working with this kind of file. After all, Perl was born to pick up the ball where shell scripts fumbled (and eventually evolved into a hodgepodge of every great, and some not so great, idea in programming languages from the past couple of decades), so it’s natural that it would have the ability to do the same sorts of things as a shell script.

But, since our configuration file is not valid Perl syntax, we can’t simply call do apache.config; as we would to import another Perl script. We’ll have to parse it into a data structure (which is better programming practice, anyway, as mentioned above). One way to do this would be a while loop, like so:

my $file = "apache.config";
my %config;
open(CONFIG, "&lt; $file") or die "can't open $file: $!";
while () {
    chomp;
    s/#.*//; # Remove comments
    s/^\s+//; # Remove opening whitespace
    s/\s+$//;  # Remove closing whitespace
    next unless length;
    my ($key, $value) = split(/\s*=\s*/, $_, 2);
    $config{$key} = $value;
}
 
# Print it out
use Data::Dumper;
print Dumper(\%config);

Now, we can access the values in our configuration file from the %config hash, such as $config{'apachectl_path'}. Another option, if you’re feeling particularly idiomatic, is to use map:

my $file = "apache.config";
open(CONFIG, "\&lt; $file") or die "can't open $file: $!";
my %config = map {
      s/#.*//; # Remove comments
      s/^\s+//; # Remove opening whitespace
      s/\s+$//;  # Remove closing whitespace
      m/(.*?)=(.*)/; }
      ;
 
# Print it out
use Data::Dumper;
print Dumper(\%config);

So, what’s the benefit to this latter example? Nothing major, it’s just another way to approach the problem. It’s a couple of lines shorter, but more importantly it has fewer temporary variables, which can be a source of errors in large programs. The multiple substitution regular expressions I’ve shown above in either example could be reduced to a single line, but I believe this is more readable, and according to the Perl documentation breaking the tests out into single tests is faster than having multiple possible tests in a single substitution. Some folks also find long regular expressions difficult to scan.

But, I only like Ruby!

OK, so you want to do it in Ruby. Ruby has a lot in common with Perl, so it’s actually pretty similar, though a bit more verbose. Ruby fans seem to discourage regular expressions, even though they are a core part of the language and Ruby has roughly the same regex capabilities as Perl, so I’ve only used one (I guess I could have gotten rid of it somehow…but I got tired of searching for the non-regex answer and punted):

config = {}
 
File.foreach("apache.config") do |line|
  line.strip!
  # Skip comments and whitespace
  if (line[0] != ?# and line =~ /\S/ )
    i = line.index('=')
    if (i)
      config[line[0..i - 1].strip] = line[i + 1..-1].strip
    else
      config[line] = ''
    end
  end
end
 
# Print it out
config.each do |key, value|
  print key + " = " + value
  print "\n"
end

Same end result as the Perl versions above: A config hash containing all of the elements in our configuration file.

What about those web applications written in PHP?

Two of the websites I maintain (Virtualmin.com, and this site) are written in PHP. One is a Joomla application with numerous extensions and custom modules and components, the other is a mildly customized WordPress site. In the case of Virtualmin.com, we’re developing a number of applications that have both Perl components for the back end work and PHP components for the web front end, so sharing configuration files can be useful. Webmin, conveniently enough, already uses shell variable key=value style configuration files, so everything we do is already in this format.

So, let’s see about getting these configuration files into a PHP data structure. PHP isn’t quite as rich as Perl in its data manipulation capabilities, but it did inherit quite a few of the same tools from Perl, so our solution in PHP looks pretty similar to the while loop version above, though it is a bit more verbose due to the keyword heavy nature of PHP (Perl is often accused of having too much syntax, and PHP has way too many keywords):

$file="apache.config";
$lines = file($file);
$config = array();
 
foreach ($lines as $line_num => $line) {
  # Comment?
  if ( ! preg_match("/^\s*#/", $line) ) {
    # Contains non-whitespace?
    if ( preg_match("/\S/", $line) ) {
      list( $key, $value ) = explode( "=", trim( $line ), 2);
      $config[$key] = $value;
    }
  }
}
 
// Print it out
print_r($config);

Hey, what about snake handlers?

Of course, it can also be done in Python. As with the Ruby implementation, I’m not certain this is the best way to do it, but it works on my test file.

config = {}
 
file_name = "apache.config"
config_file = open(file_name, 'r')
for line in config_file:
    # Get rid of \n
    line = line.rstrip()
    # Empty?
    if not line:
        continue
    # Comment?
    if line.startswith("#"):
        continue
    (name, value) = line.split("=", 1)
    name = name.strip()
    config[name] = value.strip()
 
print config

Or, as dysmas suggested on Reddit, a more idiomatic version would be:

config = {}
 
file_name = "apache.config"
config_file= open(file_name)
 
for line in config_file:
    line = line.strip()
    if line and line[0] != "#" and line[-1] != "=":
        var,val = line.rsplit("=",1)
        config[var.strip()] = val.strip()
 
print config

So, now we’ve got a config associative array filled with all of our values in all of our favorite languages (except BASH, which gets straight variables). Assuming we use a common file locking mechanism, or always open them read only, we could even begin to use the same configuration files across our BASH, Perl, Ruby, Python, and PHP scripts independently but simultaneously.
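The “open them read only” case deserves a sketch. On Unix-like systems, an advisory shared lock is enough to coordinate readers with a writer that takes an exclusive lock. This hypothetical Python version reuses the same key=value parsing as the loops above (fcntl is Unix-only, and cooperating writers would need to take LOCK_EX):

```python
import fcntl

def read_config(path):
    """Parse a key=value file while holding a shared (read) lock."""
    config = {}
    with open(path) as f:
        fcntl.flock(f, fcntl.LOCK_SH)   # advisory shared lock
        try:
            for line in f:
                line = line.split("#", 1)[0].strip()  # drop comments
                if "=" in line:
                    key, value = line.split("=", 1)
                    config[key.strip()] = value.strip()
        finally:
            fcntl.flock(f, fcntl.LOCK_UN)
    return config
```

Advisory locks only work if every reader and writer plays along, which is easy to arrange when you control all of the scripts involved.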

What’s the point?

This isn’t just an academic exercise. The simple examples above make up the early start of a cross-language set of tools for systems management.

With these simple parsers, we can build tools that use the best language for the job, while still leveraging some interesting knowledge contained in Webmin’s configuration files (which are in this key=value format). Webmin supports dozens of Operating Systems and hundreds of services and configuration files, so the config files in a Webmin installation (usually found in /etc/webmin) contain a huge array of compatibility information that would take ages to gather. If you need to know how to stop or start Apache on Debian 4.0, or on Solaris, or on Red Hat Enterprise Linux, you’d have to check an installation of those systems or search the web or ask someone who has one of those systems handy. Or, you could check the Webmin configuration file, and get the same data for all of the Operating Systems Webmin supports. It’s a pretty valuable pile of data. Imagine writing a script for your own favorite OS, and then being able to hand it to anyone that happens to have Webmin installed, regardless of their OS and version. Or, if they don’t have Webmin installed, you could provide a template configuration file that they could fix for their OS and version, addressing both situations as simply as possible.
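As a hedged illustration of that idea: the apache.config sample earlier already carries an apply_cmd, so a script can run the distribution-appropriate command without hard-coding it. The helper below is hypothetical, and assumes a config dictionary produced by one of the parsers above:

```python
import subprocess

def restart_apache(config):
    """Run the distribution-specific restart command from a parsed
    key=value config (e.g. the apply_cmd line in apache.config)."""
    apply_cmd = config.get("apply_cmd")
    if not apply_cmd:
        raise KeyError("no apply_cmd in config")
    # Naive split; fine for simple commands without quoted arguments
    return subprocess.call(apply_cmd.split())
```

With Webmin installed, the same dictionary could come from parsing the Apache module’s config under /etc/webmin, so the script stays portable across every OS Webmin knows about.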

Not the only configuration file format

Of course, this isn’t the only configuration file format out there, or even the best. Python users really like INI files, and I can’t argue with them. When I was writing Perl and Python predominantly, I used the Config::INI::Simple module from CPAN and ConfigParser for Python so I could share configuration between my various programs easily (I was generally writing a Webmin front end in Perl to a Python back end application). That worked great. So, I’m not arguing you ought to be using key=value configuration files for everything. But being able to read them makes a lot of portability data available to you for free.
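For comparison, here’s roughly what the INI route looks like with Python’s standard library parser (spelled configparser in Python 3, ConfigParser in Python 2; the section and keys are invented for the example):

```python
from configparser import ConfigParser

# An INI version of a fragment of the earlier apache.config sample
ini_text = """\
[apache]
start_cmd = /etc/rc.d/init.d/httpd start
stop_cmd = /etc/rc.d/init.d/httpd stop
"""

parser = ConfigParser()
parser.read_string(ini_text)   # parser.read(path) works for files on disk
print(parser.get("apache", "start_cmd"))  # prints /etc/rc.d/init.d/httpd start
```

The sections give you a bit of namespacing that flat key=value files lack, which is the main thing you give up for portability.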

Next time I’ll wrap a couple of these routines up into friendly libraries for easy use, and add some tests to be sure we’re doing what we think we’re doing.

Getting a great logo: reducing the field

We’re holding a logo contest over on SitePoint. We mentioned it in an article a few days ago and since then it has become the most popular contest running right now on the site! Awesome. That’s the good news. There’s also great news: The designers entering the contest are really good! There are at least half a dozen designs that we’d be proud to call our logo, and at least a dozen of the designers are folks I would love to work with in the future. Hundreds of entries would do us no good if they all sucked, but these guys are doing really solid work.

We’re about 30 hours from the end of the contest, so I wanted to post a summary of the work so far, and offer some advice to the designers, as well as offer up our thought process on why we like the logos that we like, and a few for logos we want to like, but don’t, and why. This is, by no means, an exhaustive list. For that you’ll need to check out the contest itself and the feedback on each of the entries.

Our Judging Guidelines

These are the guiding principles in our decision making process. We don’t all agree on how they should be reflected in the end product, but we all agree that these are right for the Webmin logo. It helps to know, in advance, the general feel of the branding you want, as it makes it really easy to rule out some possibly great executions of ideas that don’t fit. I think this is one of the leading causes of a failed branding effort; if you don’t know what you want, you’ll almost certainly not get it. So here are the guidelines we’re using in our judging and advice to the designers:

  • We’re an Open Source project, so super corporate looking logos probably aren’t right.
  • We’ll be printing T-shirts, so too much complexity is a negative. Costs more to print, and looks stupid when screen-printed. It also leads to a weaker brand image…takes a long time to remember a really complex logo, but a simple one can stick on first or second viewing.
  • Colors aren’t set in stone. We’ll have the SVG vector version, and can change the colors, as needed. Though poor color choices might indicate a lack of skill on the part of the designer, and we might be wasting our time trying to guide them towards perfection if the logo has problems other than color. I’ve noticed some of the designers take advice much better than others. Some of these folks are pros, while others are well-meaning amateurs, and one of the things that separates the pros from the amateurs is an uncanny ability to read my mind. We aren’t going to miss out on a great logo just because it is by an amateur, but we’re also going to choose a perfect execution of a good idea over a mediocre execution of a great idea.
  • Be gentle, and have fun. We’re encouraging everyone to get involved, so we’ve got a few entries that are, frankly, not great. You can be harder on an entry that you really like a lot, because it’s easy to soften the criticism with praise. But, if one of us picks something that another hates, be gentle in vetoing it.
  • Jamie has veto power (we all do, but Jamie really does). It’s his baby, and he gets to say no to any logo, no matter how much one of the other judges loves something.

The Top Ten (give or take a few)

This is a bunch of logos that Jamie or I loved. Kevin is reserving judgment until the end, so we’ll have to wait for the professional opinions, but here’s where we stand, right now. This is definitely not an exhaustive list of the good to great logos in the contest, but it’s the ones that we picked out as being our favorites. Some of these won’t actually be going into the final round, due to a veto by Jamie or me, but these are all great by either my estimation or Jamie’s, so they’re worth commenting on.

Modern stylized spider web by vjeko

http://contests.sitepoint.com/contests/3497/entrants/206414#entry206141

I like everything about this one. The spider web is clearly a spider web, it feels kinda like the Pentagon of spider webs: a place where Serious Internet Business takes place. The font is fun and the colors are soothing and modern. It scales small and large with no loss of impact and handles limited colors like a champ. Jamie also likes this logo. He’s unsure of hanging on to the spider or spider web branding of the old Webmin logo, so many of my favorites are in limbo (most of my favorites are spider related). But the strength of this logo won Jamie over, and he’d be happy with this concept.

vjeko deserves special mention for poking fun at me with this variant that adds a fitting tagline:

http://contests.sitepoint.com/contests/3497/entrants/206414#entry207627

Fat, friendly spider by Haetro

http://contests.sitepoint.com/contests/3497/entrants/181135#entry205608

I love this spider! Every time I see it, I like it more. It’s got real personality, and with only one color. It looks great in any color, and even with fonts I don’t care for, like the one in this particular instance. Some of the other variants of this logo have better fonts, but miss out on the purity of this one. I like the single color, and I like the square spider icon better than the later instances that round the spider or add more colors to the Webmin text (though other instances do have better fonts). Haetro has a great sense of style and a minimalist approach that I find very appealing.

Unfortunately for me, and for this design, Jamie vetoed it. The white space is pretty deeply offensive to him, and when scaled up he finds the spider frightening (I can see that…the eyes get really scary when he’s big). That said, Haetro is among the best designers in this contest, and I hate that none of his entries will make it to the final round. So, an interesting lesson to learn from this is that perhaps some of the most compelling images can also be the most off-putting. I asked around about this one, and it’s a very polarizing logo: you either love it or hate it.

Solid Webmin surrounding racks full of servers by RetroMetro Designs/Steve

http://contests.sitepoint.com/contests/3497/entrants/163719#entry208658

This one is out of left field, and that’s a big part of why I like it. It’s unlike any other entry, so far, and gets bonus points for that originality. The feel I get from the green blocks in the center is clearly “look at these modern server racks filled with systems doing Serious Internet Business”. And the big fat WEBMIN sitting on either side makes it real clear who’s in charge. It’s simple, clean, clearly relevant, reduces nicely, and looks good. Steve’s entry prior to this one is really nice, too, and in fact, Jamie likes it better. Steve has done some revisions of that idea, which Jamie and I both like better, so it may find its way into the final round.

Interestingly, while Jamie and I both like Steve’s sense of style, we diverged on which of his designs we like best. But, at least one of Steve’s entries will be in the final round.

Give me a “W”, Give me an “M”, What does that spell? Spider! by highendprofile

http://contests.sitepoint.com/contests/3497/entrants/195360#entry205913

Awesome execution on the idea of a spider built from the letters W and M. This is a gorgeous illustration. I wasn’t so sure about highendprofile’s skills based on his first entry, which was a nice idea but not very well executed, but this spider immediately blew me away. Beautiful execution and the spider is among the best illustrations in the contest. I don’t love the font on this one–it’s a bit tall and thin, but the colors are nice, and the spider is what draws the eye, so even with a not quite right font, I really like this logo.

This is another of the spider-based logos that has gotten the axe by Jamie. In this case, the cuteness that I love is a turnoff for Jamie. It won’t, unfortunately, make it into the final round, but highendprofile is a great designer, and I wish we had another idea or two from him in the contest.

Ooh, shiny spider makes me happy, by demonhale

http://contests.sitepoint.com/contests/3497/entrants/108155#entry205579

What a champ. Give demonhale an idea and he runs it all the way in. This is a great spider illustration. Cute and shiny, very modern, very friendly. The font looks spidery, and the whole thing just screams “New Technology!” Great color choice, but color isn’t necessary for this one to look good. I like that the spider is hanging by a thread…perhaps going some place new. And, who doesn’t love shiny things?

Jamie, surprisingly, did not veto this spider. It’s shiny and serious enough to pass the “is it too cute?” muster, and it’s also a really simple design. The colors are subdued and the execution is clean. So, shiny spider is going to the final round.

Webmin is like a box or a building block, by joswan

http://contests.sitepoint.com/contests/3497/entrants/178126#entry207368

This is another nice idea, that breaks from the old spider web and spider tradition. A box built from the letters W and M, with a nice solid font and cool colors. It’s quite pleasant to look at, and has some relevance for what Webmin does. Boxes don’t have a lot of personality…but it looks good nonetheless. It degrades nicely, and makes for a good favicon and icon version.

Jamie doesn’t love the colors here, and I have to agree. Orange and blue have a feel that is distinctly non-high tech. But joswan is an excellent designer, and does really nice work, so we can probably chase him into getting the colors right.

The fleur de Webmin, by ulahts

http://contests.sitepoint.com/contests/3497/entrants/133542#entry206250

This designer has submitted nothing but great entries, but this is my favorite. The WM here is subtle and pleasant, the Webmin is bold and distinctive in red and gray. Nothing says Serious Internet Business like some sturdy red text. A stylized WM doesn’t mean much, but it sure looks nice as hell. It feels like it’s got the weight of Webmin’s ten year history behind it (ten years in Internet time is like 20 generations, so this is kinda like a family crest or family plaid to represent the Webmin family of products for the next 20 generations, or more). This is a very distinctive logo.

Jamie found this one a bit boring, but likes some of ulahts’ other work. We might end up pulling another of his logos into the final round, instead of this one.

Life saving technology, by dumples

http://contests.sitepoint.com/contests/3497/entrants/188768#entry206787

This designer came out of left field with this one. It’s his only entry, but man did he ever knock it out of the park! Jamie and I both like this one a lot. The bubbly WM is just very pleasing and it feels familiar in a good way. I trust this logo. It feels kinda like a lifeguard. And, I can even kinda see one swimmer being helped to shore by another, now that I try to figure out why I think “lifeguard” when I look at it.

So, dumples has swept in with one lone entry, and found himself a shoo-in for the final round. You don’t have to do lots of entries, if the one you run with is great, and this one is great.

Infinity needs system administrators, by fbarriac

http://contests.sitepoint.com/contests/3497/entrants/119506#entry206689

Another one that Jamie and I agree on. We like the subtle use of color here, and the lovely dark gray Webmin in a round and friendly font. The infinity shape doesn’t mean much in reference to Webmin, except maybe that there are seemingly infinite things Webmin can do, but it looks good as hell doing it. I can picture this on great looking T-shirts (that cost an arm and a leg to print, because it requires shading to look this good), and it really shines on the web.

Webmin is sorta like a castle…or maybe a rook in a game of chess, by rust3dboy

http://contests.sitepoint.com/contests/3497/entrants/83333#entry207364

Jamie likes the colors and sturdiness of this one. But it’s one that I vetoed. I’m not wholly aghast at the “WM as castle or rook” idea, but this execution feels like a logo for a BBS from the 80s. I think I broke several federal laws in order to call that BBS for free when I was thirteen. Jamie might have fond memories of calling that BBS, too, and that may be clouding his judgment. So, this one won’t be in the running, but another of rust3dboy’s logos will be, as we both really like most of his ideas…in particular the next one in the list.

Do you have a flag? by rust3dboy

http://contests.sitepoint.com/contests/3497/entrants/83333#entry207307

In general, a great designer is one that can produce numerous really great logos, and rust3dboy has done that. We don’t all love all of them, but at least a couple of his entries are among our favorites. This is another good, abstract “it’s a W and an M!”, concept, implemented by a real pro. I like the color symmetry here, and the WM looks kinda like a flag (I’ve always thought people who have a Black Flag tattoo are super cool, and this looks kinda like the Black Flag logo). Nothing to complain about here.

Links in a chain, open source style, by BeeOsx

http://contests.sitepoint.com/contests/3497/entrants/170918#entry206768

This is another that both Jamie and I really like. The colors are beautiful and professional. Very modern feel all around. We have no clue what those dots and lines are all about. It’s like they’re being dropped into a shredder or something, or maybe it’s a wood chipper and those are the chips flying out. I think my favorite thing about this one is actually the colors, and the really professional execution, rather than the idea itself. It just looks really clean. BeeOsx is a really good designer with some interesting ideas, and I suspect at least this entry will be in the final round.

What else?

So those are the ones that bubbled to the top in the first round of discussions. Except for three or four vetoed entries, those will definitely be considered in our final round of judging. There are a few that have come up since we had our discussion a couple of days ago that deserve special mention, as they are interesting new entries.

Swooshy wavy W and M, by babitaverma

http://contests.sitepoint.com/contests/3497/entrants/188315#entry211034

This one is nice and subtle. I have no idea what the waves mean here, but they look awesome, and the color scheme is amazingly pretty. The font is a bit squat, but otherwise this is a great, simple, idea executed brilliantly. I’m going to unilaterally pull it into the final round of judging, but it may bounce right back out if Jamie or Kevin veto it.

This designer showed up after the first round of judging, so he missed out on getting feedback from Jamie, but I think at least two of his designs are final round calibre entries.

Butterflies, by babitaverma

http://contests.sitepoint.com/contests/3497/entrants/188315#entry212204

Another by the same designer as the previous one. If we’re going to go with a new mascot, rather than a spider or web, butterflies would be a great choice. This illustration is lovely and simple, and looks great in all sorts of colors.

WM is a box, by RetroMetro Designs/Steve

http://contests.sitepoint.com/contests/3497/entrants/163719#entry210713

This is the other entry by Steve that Jamie and I both liked, but had some reservations about during early discussion. Steve touched up the problem spots, and now it looks pretty darned good. Definitely a contender.

WM is another kind of box, by DaHoNK

http://contests.sitepoint.com/contests/3497/entrants/38446#entry210527

This is another early entry that we had some reservations about, but the designer cleaned up those problems, and now it looks really good. This one takes such a different route on color scheme that it’s notable for that reason alone.

Tell us what you think!

What’s your favorite logo, so far? Any gems we’ve missed that you think ought to make it into the final round? Let us know! This is the future of Webmin’s branding we’re talking about here.

Analysis and Reporting of System Data Part 1

There are a few basic elements to maintaining and administering systems: configuration, software management, data integrity and availability, and monitoring and reporting. This article introduces a number of tools for the last of those components, as well as presents some simple ways to create custom tools to report on data specific to your environment. There are dozens of great Open Source tools for gathering and presenting data, and so this series merely scratches the surface, but it provides a good introduction to some of the major system data analysis problems and presents some solutions.

Before trouble starts

Who, What, When, Where, Why and How

The six W’s (yeah, I’m not sure why “how” is one of the Ws, either) of reporting also apply to systems data. You want to know:

Who has been interacting with your server and services.

What they did.

When they did it, so you can determine if something they did is related to problems on the system.

Where they were coming from, just in case they aren’t who they claim to be.

Why? OK, so systems data probably can’t tell you why someone did something. You’ll have to ask them. But, with the right tools you’ll know who to ask and what to ask them, if anything funny does happen on your systems.

And, how any problems came about, so you can prevent them in the future. In short, the goal of all of this analysis and reporting on systems data is to keep your sysadmin house in order.


The Basics

In the spirit of starting from first principles, we’ll begin this little exercise with the rudimentary tools that every system administrator ought to know a bit about: grep and tail.

While there are lots of automatic tools that provide graphs and charts and doohickeys that you can click or drag or hover over for hours of fun, odds are very good that some day, you’ll need to find out something very specific about a service on your system. Do you really want to schlep all over the Internet looking for just the right log analysis tool to find out whether that important message your boss sent to your company’s biggest client was actually delivered? Of course not! Your boss is breathing down your neck right now. This is a job for grep!

grep is a search tool. It finds lines in a text file that match a regular expression1 and prints them to STDOUT. Like all UNIX command line tools, it can easily be combined with other tools for maximum awesomeness. So, let’s see grep in action, eh?

Find the boss’ email to badass@superhappymegacorp.com. Your boss (wimpy@thefacelesscorp.com) sent it out yesterday and he still hasn’t gotten a reply!

grep "to=<badass@superhappymegacorp.com>" /var/log/maillog

Assuming your boss actually sent the message, this will print out something along the lines of:

Sep 24 23:04:52 www postfix/smtp[3208]: 93498290E97: to=<badass@superhappymegacorp.com>, relay=none, delay=42281, 
status=deferred (connect to mail.superhappymegacorp.com[192.168.1.100]: Connection timed out)

Aha! The superhappymegacorp.com mail server isn’t responding. The message didn’t go through yet, but it’s not our fault! Ass covered. Rest easy and reward yourself with another one of those delicious cupcakes that cute secretary brought in this morning.

Just when you begin to think the rest of your day is going to be easy, in comes the web designer. She’s thoroughly in a panic because one of her off-shore contractors got the syntax wrong in an .htaccess file and exposed a directory filled with sensitive files. It’s now been fixed, but she needs to know if anyone outside of the company accessed those files during the couple of days while they were exposed. Hmmm…sounds like another job for grep. But, we need to find entries that don’t match a particular pattern. We’ll use the “-v” option to negate the pattern.

grep -v '^192\.168\.1\.' /var/log/httpd/access_log

This assumes 192.168.1. matches our local company subnet. The “^” indicates that the pattern must appear at the beginning of a line, which in the Apache common log format is where the client IP appears. Because grep uses regular expressions, and the period “.” has special meaning (it matches any single character), I’ve used a backslash “\” to escape the periods in the IP. The pattern would match anyway, since a period matches any single character, but it could produce false positives (or, with “-v”, false negatives): 192.168.100.1 would match even though it isn’t in the 192.168.1.0/24 network. Quoting the pattern also keeps the shell from stripping the backslashes before grep ever sees them.
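As a self-contained sketch of the same idea (the log lines, IPs, and the /private/ path here are all made up), grep’s “-c” option can count the outside hits directly instead of printing them:

```shell
# Three fabricated access_log lines; only the 10.0.0.5 request comes
# from outside the assumed 192.168.1.0/24 company subnet.
printf '%s\n' \
  '192.168.1.10 - - [24/Sep/2008:10:00:00] "GET /private/payroll.xls HTTP/1.1" 200 512' \
  '10.0.0.5 - - [24/Sep/2008:10:01:00] "GET /private/payroll.xls HTTP/1.1" 200 512' \
  '192.168.1.22 - - [24/Sep/2008:10:02:00] "GET /index.html HTTP/1.1" 200 1024' \
  | grep -vc '^192\.168\.1\.'
# prints 1
```

The same “-vc” combination works against the real access_log, of course; piping sample lines in just makes it easy to convince yourself the pattern does what you think it does before running it on production data.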

Next up, tail, a nifty little tool that I use many times every day. In its simplest form it displays the last 10 lines of a file. Because log files on a UNIX system always append new entries to the end of the file, this will always show the most recent items in the log. It’s very useful for interactively debugging problems.
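A quick way to see that default in action, with seq standing in for a log file:

```shell
# tail prints the last 10 lines by default; -n overrides the count.
seq 1 100 | tail -n 3
# prints:
# 98
# 99
# 100
```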

Even better, modern tail implementations include the “-f”, or “--follow”, option, which prints the log entries as they are added. So, if I were debugging a particularly ornery mail problem, I might watch the maillog with “tail -f” while making requests. Of course, if I’m looking at the logs of a very active server, I might want to only see very specific entries. Say, I’m not sure why a particular mailbox isn’t receiving mail. We can combine tail and grep, like so:

tail -f /var/log/maillog | grep info@thefacelesscorp.com

Now, when I send an email to info@thefacelesscorp.com, I’ll see the related entries in the maillog (of course, in some cases, it won’t show all related entries…you might then need to pick out a message ID and grep the whole log based on that ID).
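Here’s a sketch of that message-ID technique against a fabricated maillog. The awk field position assumes the Postfix log layout shown earlier (queue ID in the sixth whitespace-separated field), so adjust it for your own log format:

```shell
# Build a throwaway maillog with three entries for one message.
log=$(mktemp)
cat > "$log" <<'EOF'
Sep 24 23:04:51 www postfix/smtpd[3199]: 93498290E97: client=unknown[10.0.0.5]
Sep 24 23:04:52 www postfix/qmgr[3200]: 93498290E97: from=<wimpy@thefacelesscorp.com>, size=1234
Sep 24 23:04:53 www postfix/smtp[3208]: 93498290E97: to=<info@thefacelesscorp.com>, status=sent
EOF

# Step 1: find the queue ID on the line mentioning our recipient.
id=$(grep 'to=<info@thefacelesscorp.com>' "$log" | awk '{print $6}' | tr -d ':')

# Step 2: pull every line for that message, whatever stage it logged at.
grep "$id" "$log"    # all three entries for message 93498290E97

rm -f "$log"
```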

Next week, we’ll cover using Perl to extract useful information from your system and build time series graphs from the data.

See also

grep documentation

grep at Wikipedia

tail documentation

tail at Wikipedia

  1. Regular expressions, or regexes, are a syntax for advanced pattern matching. There is a de facto standard known as egrep, or extended grep, style regexes. This further evolved into Perl style regexes, which are used by many other languages and tools, via the pcre (Perl Compatible Regular Expressions) library. The Perl regex documentation is among the best on the subject. Jeffrey Friedl’s Mastering Regular Expressions takes the subject to the next level, and covers grep, egrep, sed, Perl, and much more. []

Open Source and Business: a Precarious Partnership

The business world has thoroughly embraced, by some definition of “embraced”, Free and Open Source software. No longer is it the sole province of the barefooted ideologues, MIT AI lab vagabonds, or bearded Berkeley beatniks. The biggest businesses in the world now have an Open Source deployment plan. Even Microsoft, the historical antagonist in the FOSS (Free and Open Source Software) story, has begun making vaguely conciliatory gestures towards the community alongside its traditional FUD (Fear, Uncertainty, and Doubt) and Embrace and Extend tactics, because their biggest customers have started demanding better interoperability, better standards compliance, and more transparency: features that are core beliefs of the FOSS world. So, big money has come to FOSS, but so far, the majority of big winners have been traditionally proprietary vendors adding FOSS solutions to their portfolio.

There are a few success stories from pure play OSS companies. Red Hat Software is one, Mozilla is another. But, I see a lot of great Open Source projects floundering with poor business plans and even poorer execution on those plans, wasting time that could have been spent developing and spending it instead on plugging the holes in the business. Most Open Source entrepreneurs start out with the goal of being able to work full-time on their project, but end up having even less time for the project and being broke, to boot. So, I’d like to offer some advice to budding FOSS entrepreneurs.

I’ve now built two businesses based on Open Source software, and I’ve learned a few things about what works and what doesn’t. Many of the “standard answer” solutions to the problem of making money on something that is available for free have died out over the years, as it’s become apparent, at least to the folks who’ve tried them, that they simply don’t work. I’d like to talk about some of those failed models, as well as some of the models that do seem to be working across a wide range of Open Source based businesses.

Service and Support (or, “what’s wrong with being a consultant?”)

This is the old standby for lots of Free Software and OSS purists. The argument goes, “don’t charge for the software, charge for the service and support that you offer”. Most of the people putting forth this as a viable business model haven’t actually built an Open Source based business based on this model. It is possible, but the scale on which you’ll be operating will never be very large.

Here’s why:

  1. Support is fundamentally a consulting business. To increase sales revenue you have to increase head count. Increasing head count is expensive, and your best employees will eventually start their own consulting business and likely compete with you. If you’re lucky, they’ll pick a niche that doesn’t overlap much with your core business.
  2. Expertise is hard to find. You may be the absolute master of the popular Open Source Frobnosticator project, and people come from all over the world to pay you for that expertise. But when it’s time to scale, see item 1 and contemplate finding or training someone to have your level of expertise with the Frobnosticator.
  3. The only advertising that works is word of mouth. You can’t easily have someone else sell your services for you. Most of the businesses (IBM, for instance) that might be well-placed to do so have their own services arm that they’d sell before they’d sell your service. Sure, IBM professional services doesn’t have any Frobnosticator experts on staff, but they’ll wing it and bill $200/hour while they figure it out. Thus, you have to pound the pavement to drum up your sales. Do not underestimate the cost of getting and keeping customers.

That said, if I can’t talk you out of building a service and support business around an Open Source project, there are a few things you can do to improve your chances and reduce the pain of the above problems.

Making the best of an Open Source service business

Pick a popular project. I mean a really popular project. Apache, MySQL, PostgreSQL, Linux for an interesting and growing niche (like mobile and embedded devices), one of the more popular web application frameworks, etc. By picking a popular project, you ensure there are a lot of users. You also guarantee that you can find other experts to hire when it’s time to scale your business from a one-man consulting shop to a real business with employees.

Become an established expert on that project. Your name needs to be on the core developer list. If it isn’t, you’ll always be picking up scraps. If you can’t see a way to get yourself into this position, you don’t have a business.

Write a book about that project, or at least start a blog. The money for technical documentation isn’t very good, and you’ll spend a lot of time on it, but nothing says “expert” like “You can pick up a copy of my Frobnosticator book at Amazon, for more details on this subject”. If you’ve got a blog, people searching for help on a particular topic related to your area of expertise will find you. There is no better marketing than helping someone solve their problems.

Swell Technology: Joe’s case study OSS service business

Back in 1999, I started an Open Source service business called Swell Technology. I actually intended to build a product company, but the only aspect of the business that ever made money was service. So, it remained a service business until it closed up shop in December of 2005.

I accidentally picked a reasonably popular project, Squid, because I had used it and thought it was really cool. I accidentally became a core developer, because I submitted a few patches, helped out a lot on the mailing lists, did lots of hardcore testing, and made myself extremely useful to the other developers on the project. When one of them needed hardware, I sent it to them. When they needed to make some money, I hired them to build things for my company.

But, it failed to scale. Swell Technology was throughout its life a small business. It bought me a 350Z, and kept me in food and houses all those years. But, I took only two vacations during that time. I also managed to avoid doing a lot of things that I enjoyed, because I was always too busy with the company and its customers.

Hardware Bundling (or, “It’s an appliance!”)

This one is frightfully common. The basic premise is, “Open Source software is hard to install, hard to optimize, and hard to use, so I’ll put it on a box, and sell the box!” This is a horrible idea, and here’s why:

  1. You can’t afford to be in the hardware business.
  2. If you’re not already in the hardware business on a large scale, you really can’t afford to be in the hardware business. Dell does not leave room for companies selling a billion or more dollars’ worth of hardware each year (see Compaq)…how do you think you’re going to hold up doing $1mil, or even $10mil, a year in sales?
  3. Hardware is really a service business. When you sell an all-in-one solution and charge a premium for it, customers want 24-hour, top-of-the-line support. If you’re a one or two man shop, that means you’re answering the phone at 3 AM on a regular basis.
  4. You can’t charge enough. Your customers will talk specs, and the moment they do, they’re going to realize you’re selling equipment they can buy from Dell for a third the price. You’ll actually find that customers want to buy Dell hardware and pay you an hourly rate to install your products on it. The moment you accept such a deal, you have a pure service business, limited to the number of hours you can summon each day.
  5. Being in the hardware business means that you have one more thing to distract you from your core competency. You develop software, right? What are you doing futzing around with hard disks and RAM and processors all day? Every 12 months your hardware platform changes, by necessity. Do not underestimate the time expense of managing a hardware-based business.

Hardware bundling is the worst possible solution to the problem of “how do I make money with Open Source software”, but it crosses every OSS entrepreneur’s mind at least once. I can’t think of a single pure-play OSS “appliance” company that has been an enduring success. Not one. The success stories I can think of have ended up selling proprietary products alongside their Open Source core (Barracuda), or pretending there is no Open Source core (InfoBlox). The really lucky ones sold out to a bigger vendor before things had a chance to go south (Cobalt). The ones that lasted a few years had time to burn through their capital and find themselves on the wrong side of the profit line. (Swell Technology, my first business, was one of those that lasted long enough to lose money on hardware sales…the hardware business was subsidized by service pretty much the whole time it was in operation.)

Plugins (or, “the coffee table is free, but the doilies are going to cost you”)

This one actually crosses the line from pure play OSS business model to a hybrid model, since the plugins aren’t Open Source. Not coincidentally, this is the point at which the business model begins to make sense as a scalable business where your core competency is what you do all day: write software. There are actually a few examples of this that work. MySQL AB doesn’t charge for the core database, but they charge for various extensions and management tools that work with the database. Because databases are a huge market and MySQL is the most popular relational database in the world, in number of installations, they can build a nice company, with employees and everything.

Of course, this is also the point at which Free Software zealots begin to squirm. It’s generally legal, as long as you comply with the relevant licenses, or hold the copyright on the software, so you aren’t restricted by the license. The majority of folks aren’t going to begrudge you making a living. But, there will be some pushback if you’re taking a historically Open Source project, and building proprietary products on top of, or alongside, it. Depending on the community, this pushback could be dramatic and ugly or peaceful and friendly.

Mambo is a fine example of handling this approach poorly. Very, very, poorly. I don’t know the whole story, but the end result was a loud and angry fork and an exodus of developers to another project. So, here are a few tips for how to avoid the OSS death penalty (a fork which takes away all of your best developers):

  1. If you aren’t a core developer of the project in question, stop here. You need to become more important to the project. You don’t have the clout to pull off a commercial venture based on this project. There is at least one shortcut to this clout: Hire one or more of the core developers, after making sure they’re on board with where you’re going. With Virtualmin, Inc. we have the original author of Webmin, Usermin and Virtualmin, as well as the second best known Webmin guy whose previous company actually funded the original Virtualmin development (that’d be me and Swell Technology). Even so, we got a bit of pushback on building proprietary products.
  2. Don’t screw with the license. If you don’t hold the entire copyright, you must abide by the license, religiously. Respect the license, and respect the developers, and the users will generally keep quiet. With Virtualmin, we actually hold all of the copyrights, but we are, nonetheless, respectful of the license of Webmin. Everything good we do in Webmin for Virtualmin, Inc. purposes, gets rolled out to the Open Source Webmin immediately.
  3. Make the core project better through your involvement. Be the best friend the users of the OSS project have. Be involved on the mailing lists or forums, make large contributions in the form of code or money, and make great things happen with the core project. Don’t be pushy about it, and make sure the big ideas you have aren’t at cross-purposes with the other developers on the project. Since we’ve started Virtualmin, Inc. we’ve rolled out a huge swath of usability and UI enhancements to Webmin, as well as several new plugins for Virtualmin GPL. Even though the Open Source users don’t get everything we build, they get more than they would have gotten had Virtualmin, Inc. not existed.

Freemium (or, “the grass is greener on the other side, but your wallet is a bit slimmer”)

This one is sort of an extension of the shareware model of yore, and it’s roughly the model we’ve chosen for Virtualmin, Inc. In this model, you’re distributing the majority of your software under an Open Source license, but for a few premium features, you charge money and protect them with a different license. So, you continue to enjoy the popularity that free software brings, while still knowing that there is a pretty good chance lots of people will pay for the extra features. Our Open Source core has millions of users, so we have a reasonably large pool of potential customers. The percentage of converts is still quite low (Webmin is downloaded 2 million times per year, and we have 700 paying customers for Virtualmin), but it is increasing at a steady rate.

This is, perhaps, the riskiest of all Open Source business models, from the perspective of keeping your Open Source users happy. We have effectively forked our own software into two versions, and because the Open Source version of Virtualmin is under the GPL, it attracts the most vocal of OSS fanatics to its defense. We hold all of the copyrights to Virtualmin, and there have been no other significant contributors to the codebase besides me and Jamie, and so the gnashing of teeth didn’t last too long. But, we still rigorously follow the advice given in the section above for keeping our OSS users happy. We give away more than we hold for paying customers. This is probably a mistake in the short term, but we suspect it will pay off in the end, when the de facto solution to a wide array of systems management problems is based on our software.

Dual Licensing (or, “you don’t have to admit you’re using Free Software”)

This is, perhaps, the oldest Free Software business model, and thus one that has stood the test of time. The basic premise with this model is that there are lots of businesses out there that would like to make use of your code in their software, but they don’t want to release their own source code under a Free Software license. So, they buy a non-viral alternative license. Sleepycat Software, makers of Berkeley DB, ran on this model for many years. Likewise for the makers of the Ghostscript PostScript library.

We even tinkered with this model for a while with Virtualmin. Very early versions of Virtualmin exist in two other commercial products, licensed under traditional copyright terms, for which we made a few thousand dollars each. But, this model works best for libraries, because the software really needs to be invisible to the end users (otherwise, users figure out that it’s just some Free Software re-branded and complain about it). This is another case where your market must be huge for it to make sense. Databases and typography are both pretty big and are used behind the scenes in thousands of products.

Note that this model only applies to software that has a viral license, like the GPL. BSD-style licenses are already liberal enough to permit re-branding and distribution without source or without any particular restrictions. So, while our underlying software, Webmin, is used in hundreds of products, we don’t see any licensing revenue from this usage, because it is under a BSD license.

Hosted Applications (or, “Web 2.0!”)

This one is a beauty. You don’t have to worry about licenses, much. Your end users never need to know you’re using FOSS, though you can still get lots of goodwill by tossing out crumbs now and then. The vast majority of Web 2.0 companies (Facebook, MySpace, Digg, 37signals, etc.) fall into this category to one degree or another, as do the big search (Google) and internet media companies (Yahoo). In these cases FOSS is your lever, and some big problem is the world you want to move. It’s kind of like TV, in that you can offer premium services like HBO, or advertiser supported services like the big three networks.

I won’t go into too much detail about business models for web applications, as it is pretty well-covered ground, and I’ve never actually built a pure play web application business, so my knowledge is all anecdotal. But, there’s a lot to be said for hosted applications.

  1. You don’t have to worry about platform compatibility. At least 25% of my development time goes into making Virtualmin Professional run easily on several operating systems, versions, and architectures. It’s a serious drain on productivity, and yet it’s a necessary expense.
  2. Bugfixes can roll out immediately. At Virtualmin, we manage update repositories for our software, so it’s easy to update to the latest versions…but we still find customers running versions over a year old. Having many versions in the field is another big drain on productivity, because you’re answering questions about bugs that have long since been fixed (sometimes I don’t even remember that it’s a bug that’s been fixed, so I end up helping a user troubleshoot the problem from scratch again, only to find at the end of it all that all they had to do was update to the latest version).
  3. Licensing is a hard problem. Selling software and protecting it from illicit use requires walking on eggshells. Your honest users (luckily most of them, in our field) will be offended if the license management gets in their way. Users who don’t want to pay will do nasty things to avoid it. One of our serious headaches is chargebacks due to fraudulent credit card purchases. Of course, if your hosted app business model involves credit card billing, you have this problem, too…but since the software isn’t installed elsewhere you can lock the account after the chargeback comes through and the user loses their data and the service. Our stick is much smaller: We shut down updates, and make the red license violation box show up on the front page of the software whenever they use it. We could be more harsh, but then we’d risk treating honest customers poorly in the event of problems contacting the licensing server, which we won’t do.

So, those are the obvious business models for Free and Open Source software. A couple of them are known to fail. A couple are known to be difficult and require a lot of luck and a lot of popularity. And a couple are proving themselves effective in a large number of cases. Which one is right for you and your favorite Open Source project, if any, is up to you to figure out.

See also

Getting Real, a book by the founders of 37signals about starting a web application business on the cheap.

Paul Graham’s essays. Virtualmin, Inc. accepted funding from Paul’s company Y Combinator in 2007, and many of his essays inspired the business we’ve built over the past couple of years. We’re big fans.

Webmin Logo Contest!

Your mission, should you choose to accept it, is to bring Webmin’s branding into the modern Web 2.0 era, while still representing the respectability that IT guys demand of their tools. In return for your trouble, you’ll win some cool prizes, including $500 cash and a Virtualmin Professional Unlimited license. You’ll also have the satisfaction of knowing that your work is being seen by millions of people every day for years to come.

Make a new logo for Webmin, and you could:

  • Impress the opposite sex!
  • Win the admiration and respect of your peers!
  • Win $500, a T-shirt with the logo you design, plus other fantastic prizes!

OK, so maybe only the final one is guaranteed to be true.

The Fine Print

OK, so it’s not fine print. But these are the rules for submission:

  • Original work only. No composites, borrowed clip art, etc. Webmin is legally clean and will remain that way.
  • Entries must be submitted in SVG vector format. If you’re feeling adventurous, make a favicon.ico, as well. Entries in anything other than SVG will not be accepted.
  • Keep it simple enough for a T-shirt, coffee mug, or sticker. Fewer colors are better, because more colors cost more to print, and usually look terrible. If it looks good in white on black and black on white, you get bonus points.
  • What colors you choose will, to some degree, dictate future themes for Webmin: choose wisely.
  • You may submit as many logo designs as you like.
  • You may, or may not, derive your logo ideas from the existing Webmin spider web logo. Go with your instincts.
  • We will solicit feedback from the Webmin community, but we’re the sole arbiters of the final winner.

Judges

The winner will be determined by the following judges:

Jamie Cameron – Creator and primary developer of Webmin, founder of Virtualmin, Inc.
Kevin Hale – Renowned designer, Particle Tree blogger, Treehouse editor and writer, founder of Wufoo.com.
Joe Cooper – Webmin developer, founder of Virtualmin, Inc. (And brainiac who came up with the idea for this contest.)

So break out your Illustrator or your Inkscape, and get started! Webmin’s tenth birthday only happens once, and Webmin has only ever had two logos (by some definition of “logo”, since Jamie designed the first one).

Visit this contest at SitePoint to submit your entries, and to see the competition, so far.