Homebrew Package Installation for Servers

Homebrew logo

One of our customers came to us a couple of weeks ago wanting to run Virtualmin on Mac OS X. I, foolishly, said, “Sure! I can help with that. I installed Virtualmin on OS X several years ago, how much different can it be? It shouldn’t take more than a couple of hours.” It turns out, it can be remarkably different. In the intervening years, Mac OS X has evolved in many interesting directions, mostly positive, some questionable. Mac OS X remains exceedingly weak for server usage, for reasons well out of the scope of this short article. But, it is quite strong for desktop/laptop use, and many people want to be able to develop their web applications on Mac OS X even if they will be deployed to Linux servers.

Enter Homebrew

The last time I set up Virtualmin on Mac OS X systems, the best package management tool, and the best way to install a lot of Open Source software, was Fink (Mac Ports, then called Darwin Ports, was still quite new at the time).

Fink is an apt repository of dpkg packages built for Mac OS X. I love apt and dpkg (almost as much as yum and rpm), and Webmin/Virtualmin have great support for apt and dpkg, and so I was all set to choose Fink for this new deployment. But, there are some issues with Fink. First, it installs everything with its own packages, including stuff that is already available from the base system. For Virtualmin, whenever possible, I like to use the system standard packages for the services it manages. Homebrew is designed to work with what Apple provides when possible, which is somewhat more aligned with the Virtualmin philosophy.

Second, and perhaps more important, Fink is seemingly much less popular and much less actively maintained than Homebrew. I’m not sure why. Possibly because the Homebrew website looks good, and the documentation is very well-written, while the Fink home page is a little drab and looks complicated. And, Fink package versions tend to be quite a bit older than the packages provided by the very active Homebrew project. This can be a much more serious issue. Security updates are absolutely vital in a web server, and a package repository that is actively maintained is the best way to ensure you’ll have security updates.

So, I’ve spent the past couple of days experimenting with Homebrew. It’s a pretty nice system, and its community and developers are active, responsive, and helpful. All great things. But, its primary advertised feature is also its biggest weakness and most dangerous mistake.

Installation Without sudo

Or: Homebrew Considered Harmful

One of the major advertised features of Homebrew is that you can install it, and any package, without root or sudo privileges. There are good reasons one might want this, but on a server, it has alarming side effects, and it is one of the first things I would need to correct for our use case (of installing a virtual hosting stack and Virtualmin). The example I’ll use here is that of MySQL. When you install the mysql package from Homebrew, it will be installed with the ownership of all files set to the user that installed the package. And, more dangerously, it will be set up to run as the user that installed it.

This decision was made because Homebrew often builds the software at install time, rather than providing a binary package (there is a new “bottles” feature that installs binary packages, but that wasn’t intended to address the sudo problem). The risk of building software with sudo or root privileges is very real, and in this case it results in the choice to build the software as a non-root user.

Other package managers, like dpkg and rpm, resolve this problem with toolchains designed around building the packages within a chroot so that unwanted behavior is contained. For example, mock on Fedora and CentOS provides an easy-to-use tool for building packages across many distributions and versions inside of a chroot environment with only the dependencies specified by the package. The most popular Linux distributions distribute binary packages that were built in a controlled environment. But, Homebrew generally builds the software at install time, with no chroot to protect the system from broken or hostile build processes. And, so they insist you run it as a non-root user. This is, I suppose, a logical conclusion to come to, based on the premise of a package manager that builds software on the user’s system without being confined in a container or chroot, but it has negative consequences.

For example, when I install MySQL from Homebrew, everything is owned by joe:staff. The provided property list file for starting the server is also designed to start it as that user, when the user logs in. For a development system, this may not be a big deal, and even makes a certain sort of sense (I prefer my development environment to more closely mirror my deployment environment, but I can see reasonable arguments for the way they do it). But, for a server, it is simply untenable.

The most important reason it is a bad decision is that it leads to many, possibly all, of the services running with the privileges of the user that installed them. Which, in most cases, is probably a powerful user (mine is an administrative user with sudo privileges, for example). So, in the event any of the services are compromised, all of the services will be compromised, and likely so will the user account in question. The security implications of this really cannot be overstated. This is a huge problem.

This is why Linux and UNIX systems (and even Apple, who aren’t historically renowned for their strong multi-user security practices) run all services as different users, and with restricted privileges. On the average LAMP system, there will be an apache or www user that runs Apache, a mysql user that runs MySQL, a nobody user that runs Postfix, and web applications will usually be run as yet another user still. These special users often have very restricted accounts, and may not even have a shell associated with them, further limiting the damage that can occur in the event of an exploit of any one service. Likewise, they may be further restricted by SELinux or other role-based access controls. Any one of these services or applications being compromised through any means won’t generally compromise other services or users. Homebrew throws that huge security benefit away to avoid having to sudo during installation.
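You can see this separation on any Linux system by listing the dedicated system accounts, which exist precisely so that no two services share privileges. This is a minimal sketch; the exact account names, UID ranges, and shells vary by distribution.

```shell
# List system accounts (UIDs between 1 and 999 on most modern Linux
# distributions). Service users like apache, mysql, and postfix live here,
# each with its own UID and, usually, a nologin shell.
awk -F: '$3 > 0 && $3 < 1000 {print $1, $3, $7}' /etc/passwd
```

On a typical LAMP server this prints one line per service account, and the third column is usually /sbin/nologin or /usr/sbin/nologin, meaning nobody can log in interactively as that user.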

It’s probably too late to convince the Homebrew folks to backtrack on that decision. But, it’s not terribly difficult to fix for one-off installations, and many do consider it a valuable feature of Homebrew. Fixing the installed services as I’ve done has some side effects that may also be dangerous, which I’ll go into at the end of the article, but since I figured out how to do it, I thought I’d document it. During my research I found that an alarming number of users are using Homebrew in server environments and I found a number of users asking similar questions about various services, so, maybe this will help some folks avoid a dangerous situation.

So, let’s get started. After installing MySQL (using the command brew install mysql), here are the changes you’ll want to make.

Update The Property Lists File

The way Homebrew recommends running MySQL after installation is to link the provided plist file in /usr/local/opt/mysql/homebrew.mxcl.mysql.plist into your ~/Library/LaunchAgents directory, and add it using the launchctl load command. This sets it up to run at all times when your user is logged in, which is great if you’re developing and only need it running when you’re logged in and working. But, we want it to run during system boot without having any users logged in, and even more importantly we want it to run as the _mysql user.

So, instead of linking it into your local LaunchAgents directory, as the documentation suggests, copy it into your system /Library/LaunchDaemons directory.

$ sudo cp /usr/local/opt/mysql/homebrew.mxcl.mysql.plist /Library/LaunchDaemons

Then edit the file to add user and group information (you’ll have to use sudo), add a --user option, and change the command to mysqld and WorkingDirectory to /usr/local/var/mysql. Mine looks like this:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key>
  <string>homebrew.mxcl.mysql</string>
  <key>ProgramArguments</key>
  <array>
    <string>/usr/local/opt/mysql/bin/mysqld</string>
    <string>--user=_mysql</string>
  </array>
  <key>UserName</key>
  <string>_mysql</string>
  <key>GroupName</key>
  <string>_mysql</string>
  <key>RunAtLoad</key>
  <true/>
  <key>KeepAlive</key>
  <true/>
  <key>WorkingDirectory</key>
  <string>/usr/local/var/mysql</string>
</dict>
</plist>

Notice the addition of the UserName and GroupName keys, both set to _mysql, as well as several other altered lines.

Note: I am not a Mac OS X or launchd expert. There are a number of aspects of the Mac OS X privilege model that I do not understand. I would welcome comments about how the security of this configuration might be improved. Also, the entirety of my experience with launchd is the several hours I spent playing with it and reading about it to convince MySQL to run as a different user. But, I’m pretty much certain that the way Homebrew does servers is worse than what I’ve done here.

Change Ownership of MySQL Installation and Databases

The _mysql user does not have permissions to read things owned by the joe user (or the user you used to install MySQL with Homebrew), so you’ll need to change ownership of all MySQL data files to _mysql:wheel.

$ sudo chown -R _mysql:wheel /usr/local/var/mysql/

Change Ownership of the Property List File

Property list files in the /Library/LaunchDaemons directory (or /Library/LaunchAgents) must be owned by root, for security reasons. So, you’ll need to update that, as well.

$ sudo chown root:wheel /Library/LaunchDaemons/homebrew.mxcl.mysql.plist

Load and Start the MySQL Daemon

The launchctl command manages agents and daemons, and we can add our new service by loading the property list:

$ sudo launchctl load /Library/LaunchDaemons/homebrew.mxcl.mysql.plist

And, then we can start MySQL:

$ sudo launchctl start homebrew.mxcl.mysql

If anything goes wrong, check the system.log:

$ sudo tail -f /var/log/system.log

I found the documentation at launchd.info particularly helpful when working out how to use launchd.
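A quick way to confirm the change took effect is to check which user owns the running mysqld process (the `[m]` in the grep pattern is just a trick to keep grep from matching its own command line):

```shell
# Show the owner of the mysqld process. After the changes above it should
# be _mysql, not your login user. Prints a note if mysqld is not running.
ps axo user,comm | grep '[m]ysqld' || echo "mysqld is not running"
```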

Future Concerns

Since this corrects the big security issue with the Homebrew installation of MySQL, and this technique could reasonably easily be applied across every Homebrew-installed service, why aren’t we happy?

Updates are part of security, too.

Contrary to popular belief, the most important reason to use a package manager is not to make it easy to install software; if all a package manager does is provide easy-to-install packages, it is not an effective package manager. The most important reason to use a package manager is to make it easy to update software.

Homebrew makes installation and upgrades reasonably easy, but the steps I’ve taken in this article to make MySQL run as its own user seem likely to break updates, since some files created during installation have changed ownership. A newer version of MySQL isn’t available in the Homebrew repository, so I can’t test whether it does break upgrades or not. Nonetheless, fixing this issue compounded across many services (the Virtualmin installation process normally installs: Apache, MySQL/MariaDB, PostgreSQL, Postfix, Dovecot, BIND, Procmail, SpamAssassin, ClamAV, Mailman, and a bunch more) will likely prove to be a maintenance challenge that is probably not worth the effort.

So, despite having figured out how to make this work, I’m now going to spend the same amount of time and effort giving Mac Ports a thorough test drive. I have a pretty strong suspicion it will be a better fit for server usage. This way of working feels like fighting against the way Homebrew wants to operate, and when you find yourself having to work so hard against the tool, it’s probably the wrong one for the job.

And What Does All This Mean for Virtualmin on Mac OS X?

Well, I’m a sucker for the sunk cost fallacy, so I’m planning to spend another couple of days working out a basic install script for Virtualmin on Mac OS X, probably using Mac Ports (I can see a way forward using Homebrew, but I don’t like the terrain). I’ll likely never recommend Mac OS X for a production server deployment, but it’s certainly not the worst OS out there for the purpose.

DNS for Web Hosting (Or: What the heck is a glue record?)

Has there ever been a service that is more fraught with confusion, misconfiguration, and hair-pulling all-nighters than DNS? Maybe, but that was a rhetorical question. DNS confuses a lot of new system administrators (and, if you find yourself thinking about DNS, it means you’ve become a system administrator…but, don’t worry, I’m here to help you through your transition from normal person to keeper of UNIX wisdom).

In this article, I’m going to show you how to do some interesting things with DNS, which are all the steps you need to take for springing a brand new domain name into existence so you can put your awesome website on it! Here’s what this article will teach you how to do:

  • Search for and register a new domain at Namecheap.com (steps will be very similar at any registrar, but Namecheap is quite good and low-cost)
  • Configure initial records using the registrar’s name servers
  • Set up a new name server using BIND
  • Set up a new zone with address and name server (NS) records
  • Set up glue records at the registrar
  • Test your new DNS zone using command line tools

What I won’t cover in this article is setting up BIND using Webmin. I’ve written about that on a few occasions over the years, and the most recent writings are in the Webmin wiki in the BIND DNS Server module documentation, as well as the BIND tutorials section. Jamie also wrote an introduction to the Domain Name System. This post is somewhat more geared toward beginners than most of those, and also covers things that aren’t really within the scope of Webmin, like registering a name and setting up glue records at the registrar. But, once you’ve wrapped your head around the basics of DNS, you may find that Webmin (and the related documentation) can make your job administering DNS servers a little easier and faster, particularly if you have a lot of zones to manage.

Search and Register a Domain Name

While picking the perfect domain name is out of the scope of this article, there are many domain-picker tools on the web to help you out. I tend to prefer to just brainstorm for an hour or two on the theme of my business or project, and come up with a half dozen or more good enough names.

Names should be as short as you can find (but don’t worry about getting a <5 letter domain, because a lot of those are already taken, even the nonsense words). Names should probably not use special characters; the dash, “-”, is legal in domain names and is sometimes used, even for business sites, but it makes your domain harder to type on mobile devices. That is a big negative, because more people today browse the web on mobile devices than on traditional computers. Numbers should also be avoided if possible, for the same reason. Unicode characters can be used, but you shouldn’t use them unless you are serving a market that regularly uses Unicode domain names, such as China. Setting up Unicode domain names using the Internationalized Domain Name (IDN) standard won’t be covered here, but I may address it in a future post.

Once you’ve settled on some possible names or used a domain finding tool to find the name you want and that you know is available, create an account at Namecheap, or your preferred registrar. Once logged in, click on Domains->Registration.

Namecheap registration form

Enter the first part of the domain name you want. I want linuxfortheweb.com, because I’m working on a new book about system administration for web servers and Linux For The Web is a pretty snappy working title. You can leave off the .com, and Namecheap will list all of the popular Top Level Domains (like .com, .org, .net, .io, .biz, .co, etc.), and you can see whether they’re available. In my case, linuxfortheweb.com was available, so I grabbed it!

If the name you want isn’t available in your preferred TLD, don’t despair. Other TLDs are quite commonly used these days, and users aren’t terribly confused by an odd TLD. It’s true that a .com is preferred for businesses and such, and if it’s important to you to have a .com, you’ll need to try some of the other names you brainstormed earlier, or find one you can buy from its current owner.

Once you select the domain you want, you can go through the rest of the checkout process. Make sure all of your contact information is the information you want to have on your public whois record (or choose to use the privacy features provided by most registrars, including Namecheap). The whois record is normally public and visible to anyone on the Internet; we will talk about how to query whois records using command line tools later in this article. Most registrars will act as a proxy on your behalf and provide their contact information instead of yours, and they will forward email to you when they receive any intended for the owner of the domain. This is usually a low-cost service, but I rarely bother with it, as my information is all over the Internet already.
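If you’re curious what a whois record looks like right now, the whois client ships with most Linux distributions and with Mac OS X. The grep here just trims the output down to the most interesting lines; the exact field names vary between registries, so you may need to adjust the pattern.

```shell
# Query the public whois record for a domain and pull out the registrar
# and name server lines. Skips quietly if whois is not installed.
domain=linuxfortheweb.com
command -v whois >/dev/null && whois "$domain" | grep -iE 'registrar:|name server:' || true
```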

After completing the registration process, and paying for the domain, you can configure some initial records. You have a couple of options when setting up DNS: Many registrars will host your DNS zones for you, which is simple and requires very little back end knowledge; or, you can delegate the zone to your hosting provider’s DNS servers; or, and this is my eventual goal in this article, you can host them on your own DNS server running BIND. We’ll talk about the advantages and disadvantages of each approach as we go along.

Creating Host Records At The Registrar

The simplest way to get a domain name functioning is often to let your registrar handle DNS. This can be sub-optimal for a number of reasons, but it’ll get you started without having to install and configure your own DNS server. I will talk more about why this might be limiting, and I will show you how to setup your own name server later in this article.

At Namecheap, it is not immediately obvious where to go, as the label for the option may be confusing if you don’t know the terminology. To edit DNS records at Namecheap, click on the domain in the list of domains. This will take you to the dashboard for that domain name.

Look in the left menu for the item labeled All Host Records and click it.


Clicking All Host Records at Namecheap

This will take you to a simple UI for creating new host records in your zone. To start with, it will only have a couple of records, plus the default mail forwarding option selected. I’ve edited my zone to look like the following, and I’ll explain below what each field and value means.

Domain records configuration at Namecheap


These two items, @ and www, are default records that are pre-configured to point to the Namecheap parking service page. We want to change this immediately, if not sooner, to something we own and get value from, even if it’s just our own landing page that says “Coming Soon!”.

The @ symbol simply means, “the domain name itself”, in this case linuxfortheweb.com. I have filled in the IP of my web server in the IP Address/URL field. Setting up a web server to listen on this address won’t be covered here, but a later article will walk you through how to do it, and there’s a lot of resources on the web to help as well (or you could install the Open Source Virtualmin control panel and it would do most of the work for you).

The Record Type field indicates the type of record this will be. The DNS standard has defined a number of record types to help clients and other DNS servers find specific information about a zone, such as addresses, mail server names, name server names, etc. I will talk more about some of the types of records in a moment, but for now we only care about A records, or Address records.

An Address record (A record) is a record type that maps a name, like “www.linuxfortheweb.com”, to an IP address, like 192.0.2.1 (an example address; yours will be the real IP of your web server).

You may also see an automatically-generated SPF record that is created by Namecheap when you choose to use their mail forwarding service. We won’t be keeping this for long, but it doesn’t hurt anything to leave it alone.

For now, you can save your new records.

Now, we’d like to be able to test this, but it can take as long as a couple of hours for the registrar’s DNS servers to update, so we probably can’t actually test immediately. But, if you take a coffee/tea break, or have a walk around the block, you might be able to come back and see it working. Just for fun, here are the basic testing tools for DNS:

On Linux systems, you would use the host command (which is provided by the bind-utils package on CentOS):

[joe@alice ~]$ host linuxfortheweb.com
linuxfortheweb.com has address 192.0.2.1
linuxfortheweb.com mail is handled by 20 eforward5.registrar-servers.com.
linuxfortheweb.com mail is handled by 10 eforward1.registrar-servers.com.

The host command includes MX records in its output by default. We can ignore that information for the time being. What’s important is that we’re getting back the address I put into the A record for @ earlier.
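The dig tool, from the same bind-utils package as host, is also worth knowing; its +short option strips the verbose output down to just the answer. This is a sketch of the basic usage, not the only form the command takes.

```shell
# Look up just the A record for a name; +short suppresses everything
# except the answer itself. Skips quietly if dig is not installed.
name=linuxfortheweb.com
command -v dig >/dev/null && dig +short "$name" A || true
```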

On a Windows system, you would use the nslookup command:

C:\>nslookup linuxfortheweb.com
Default server: dns1.registrar-servers.com

Name:    linuxfortheweb.com
Address: 192.0.2.1

Cool, it looks like everything is working as expected. If you really wanted to keep things simple, you could stop here. This is already providing basic name service for your domain, and it will point users to your address when they browse to your domain name. That was pretty easy, huh? But, most people with serious aspirations to build awesome things on the web will want to have more complete control of their DNS information. Or, if you’re somewhere in the middle, and just want to add a mail server, so you can send and receive mail, the Namecheap interface makes that pretty easy. But, for this article, I’d like to skip that and move right on to setting up our own name servers.

Delegation Is Delectable!

Since you continued reading, I’ll assume you want all the power and responsibility that running your own DNS server provides. So, we have to do something pretty tricky: Tell the world how to find addresses in our domain.

Does that seem simple? If so, that probably means I haven’t explained enough about how DNS works.

In its simplest form, DNS is provided by a hierarchy of name servers on the Internet, with the root name servers at the top of the hierarchy. The root name servers delegate requests for zones (a zone is a collection of records under one domain name) to the authoritative name server for that zone. The root name server information is distributed as a list (technically, modern systems have a hints file, which just tells the DNS resolver client how to find the full list of root name servers and the zones they are authoritative for). The root name servers very rarely change IP addresses, and are known to clients because the lists are distributed with operating systems. You can see what the hint file looks like at the IANA website. You don’t need to understand the contents of this file yet.

A name lookup request begins with the DNS resolver on your computer or device breaking up the name into its components, beginning with the top-level domain. It will then request the authoritative server for the next level of the domain, and each level of the name will be queried until a server that is authoritative for the full name is found. In my case, with the domain linuxfortheweb.com, the top-level domain is com, and so a client would ask the name servers for the com zone about the name linuxfortheweb within that zone. The com name servers would then delegate the request to the name servers that are authoritative for my name, which based on the steps we took above are currently the name servers at Namecheap. How do the com servers know where to delegate that request? That is the job of the domain registrar: they provide the delegation information to the registry that runs the top-level domain’s name servers.

At this point, the request path is pretty simple. The client checks the root name server, the request is delegated to the Namecheap servers, the client asks one of the registrar’s servers (trying a different one if the first doesn’t respond in a timely fashion), and the registrar’s server replies by sending back the requested record (since we’re trying to keep this simple, an address record mapping the name linuxfortheweb.com to the IP address of our web server).
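You can actually watch this whole chain happen with dig’s +trace option, which starts at the root servers and follows each delegation down to the authoritative answer:

```shell
# Follow the delegation chain from the root servers, through the com
# servers, to the servers authoritative for the name. Requires network
# access; skips quietly if dig is not installed.
name=linuxfortheweb.com
command -v dig >/dev/null && dig +trace "$name" A || true
```

Each section of the output is one hop in the hierarchy: first the root NS records, then the com NS records, then the NS records for your domain, and finally the answer.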

Now we want to delegate that one step further, and have the client resolver contact our server. For that we need the so-called glue record to point to our name servers. “But, but, but I don’t have any name servers!” I hear you cry. And, to that I say, “Not yet. But, you will.”

Go Forth And Install BIND

Now, you’re gonna get serious about being system administrators by setting up your own name server. You’ll need a server, or, rather, two servers (though you can fake it with just one as long as you have two IP addresses for it). Nearly any Linux or BSD server will do, as BIND is not a very resource-intensive service, as long as it is only answering requests for your own zones. You could use a virtual machine or a physical server. As long as it has a reliable always-on connection to the Internet and a fixed IP address, you can use it as a name server. I have three virtual machines specifically for DNS service, and two are configured to automatically be slave servers to the first one. I will show you how to do that in a later article.

Start by installing the BIND package for your OS.

On CentOS/RHEL or other yum-based operating systems:

# yum install bind

Or, on Debian/Ubuntu:

# apt-get install bind9

Or, on FreeBSD (assuming you are using pkgng, the new binary package manager):

# pkg install bind9

Or, on Solaris (assuming you have setup to use OpenCSW):

# pkgutil -i bind9

Now, edit the named.conf using your favorite text editor (I like vim!). This file may be in a variety of places, depending on your OS; on CentOS it is in /etc/named.conf. There will be a lot of stuff in the configuration file that is beyond the scope of this article, but at the bottom add the following (modified to suit your domain and other name server IP addresses, of course):

zone "linuxfortheweb.com" {
        type master;
        file "/var/named/linuxfortheweb.com.hosts";
        also-notify { 192.0.2.2; };
        allow-transfer { 192.0.2.2; };
        notify yes;
};
Be careful to adjust the file paths to suit your OS. My example is on a CentOS 7 system; Debian and Ubuntu conventionally keep zone files under /etc/bind or /var/cache/bind instead, and some operating systems run BIND in a chroot, which changes the paths again. It’s nice to do things in the way that is common for your OS so that other administrators will know where to look for things.

This snippet of code instantiates a new zone called linuxfortheweb.com.

It is a master zone, which means it will be the source from which any other slaves configured later get their information.

The file directive instructs BIND where to find the host records file for this zone.

The also-notify directive indicates that this server will notify another DNS server at the provided IP address of changes to the zone.

The allow-transfer directive indicates that this server should allow the server at the provided IP address to request a transfer of this zone. The combination of this option and the previous are necessary steps to permit another DNS server to act as a slave for this zone.

The notify directive instructs BIND to notify the servers indicated in the also-notify section when changes to the zone have been made.
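Before moving on, it’s worth checking the configuration for syntax errors with named-checkconf, which is installed alongside BIND and prints nothing when the file is valid:

```shell
# Validate the BIND configuration file; silence means success.
# Adjust the path for your OS. Skips quietly if BIND is not installed.
conf=/etc/named.conf
command -v named-checkconf >/dev/null && named-checkconf "$conf" || true
```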

Next, edit the zone file (/var/named/linuxfortheweb.com.hosts in my example), and insert the following:

$ttl 38400
linuxfortheweb.com.	IN	SOA	ns0.linuxfortheweb.com. joe.linuxfortheweb.com. (
			2014120301
			10800
			3600
			604800
			38400 )

This is the first part of the zone file, sort of a preamble. It includes information that applies to the zone itself. Line-by-line, this snippet contains the following:

$ttl - The default time-to-live for this zone. If not specified on an individual record, this will be the time-to-live for that record. Other name servers may cache records from this zone for up to this length of time (in seconds); a TTL specified on an individual record overrides this default for that record.

SOA - The SOA, or Start Of Authority, resource record is actually a special record type (like the A records we already discussed). Because it starts with linuxfortheweb.com., this SOA record applies to the linuxfortheweb.com zone. If other subdomains within this zone were to be delegated to other servers, or other zone files on this system, they would have their own SOA. The IN simply stands for Internet and can generally be ignored, as the vast majority of records you deal with (maybe ever) will be IN records.

The next two fields within the SOA record are the primary name server (generally the master name server), and the email address of the administrator of the zone, with the @ symbol converted to a . (period), since @ has special significance in a zone file.

Then the series of numbers are, in the order in which they appear:

  1. Serial number – A unique identifying number for this zone. It must always increment (get higher) every time the zone is updated. A popular convention is to use a year:month:day:number of modifications that day format.
  2. Time-to-refresh – How often a secondary, or slave server checks its zone database files against a master server.
  3. Time-to-retry – How long a secondary server waits before trying to contact a master server, after a failed transfer.
  4. Time-to-expire – When the secondary server will discard all zone data, if the master cannot be reached.
  5. Minimum-TTL – In BIND 9 the minimum time is used to define how long a negative answer is cached. Old BIND versions used this value differently, but you shouldn’t be using a BIND version that old.

These can generally simply be ignored, as long as they are something reasonably sane, and serial number always increases. But, I wanted you to know what they mean, so they aren’t intimidating. They are all necessary, even if we don’t have to think hard about them.
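If you like the year/month/day convention for serial numbers, date can generate one for you; the final two digits are the revision counter, which you bump for each additional change made on the same day:

```shell
# Generate a zone serial number in YYYYMMDDnn form, where nn is the
# revision counter for the day (01 here, for the first change of the day).
serial=$(date +%Y%m%d01)
echo "$serial"
```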

The next part of the zone hosts file is:

@                        IN A    192.0.2.1
www.linuxfortheweb.com.  IN A    192.0.2.1
mail.linuxfortheweb.com. IN A    192.0.2.1
@                        IN MX 5 mail.linuxfortheweb.com.
ns0.linuxfortheweb.com.  IN A    192.0.2.1
ns1.linuxfortheweb.com.  IN A    192.0.2.2
@                        IN NS   ns0.linuxfortheweb.com.
@                        IN NS   ns1.linuxfortheweb.com.

These are the various mandatory records for this zone. This is pretty much the minimum set of records you’ll need for a full-featured web hosting server and for a domain that you accept email for.

Breaking it down, the A records are, as previously mentioned, Address records. They indicate the IP address of the hostname in question (remember that @ is merely shorthand for the zone name itself, or linuxfortheweb.com. in this case).

The MX record indicates that mail for @ (linuxfortheweb.com.) will be accepted by a host named mail.linuxfortheweb.com. (which coincidentally shares an IP with the web server, but doesn’t need to). The number 5 is a priority. If we had multiple MX records, we could indicate the order in which they should be attempted. The lower the number, the higher its priority; i.e. an MX record with a priority of 10 would act as a backup to the one with a 5, and would only be contacted if the priority 5 server does not respond.
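For instance, a zone with a backup mail exchanger (the mail2 host here is hypothetical, just for illustration) would carry a pair of MX records like this, and the priority 10 host would only be tried when the priority 5 host is unreachable:

```
@   IN MX  5 mail.linuxfortheweb.com.
@   IN MX 10 mail2.linuxfortheweb.com.
```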

The final type of record in this snippet is the NS, or name server, record. This is used to notify clients of which servers to contact to resolve names within this zone (@), but it could also be used to delegate subdomains into other zones served by other name servers. This information should match the glue records at your registrar, which we will cover in more detail in the next section.

One other useful thing to know about BIND hosts files: A name without a . (period) at the end will have the zone name appended to it. So, mail would become mail.linuxfortheweb.com. There is no real reason to prefer one form or the other, but I personally like to use the full name. Whatever you do, you should do it consistently, so it is easier to read and easier for future administrators (including yourself!) to make sense of it quickly.
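To make that concrete, these two records name exactly the same host: the first is relative (BIND appends the zone name), while the second is absolute because of the trailing dot:

```
mail                     IN A
mail.linuxfortheweb.com. IN A
```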

There are several other types of record, and several other components that could be included in this file, but I will save them for a later article. The purpose of this article is to get your name server up and running with a minimum configuration for web hosting. And, just the lines in the two snippets I’ve provided above (adjusted appropriately for your domain and your IP address(es)) will do the job.

You can, at this point, save your work, and start up your name server.

On CentOS 7 and new versions of Debian and Ubuntu (whenever they begin shipping systemd), you can do that with:

# systemctl start named

On CentOS 6 and below, you would use:

# service named start

On current and older Debian/Ubuntu versions:

# /etc/init.d/bind9 start

On FreeBSD:

# /usr/local/etc/rc.d/named.sh start

On Solaris:

# svcadm enable dns/server

A Little More Testing

Let’s return to our basic testing tools, and see if our new DNS server is actually answering queries for this zone. We’ll use one extra option to the host command to force it to query the local server rather than going out and checking with the root name server and the registrar (since we haven’t told them about our new server yet).

$ host linuxfortheweb.com localhost
Using domain server:
Name: localhost

linuxfortheweb.com has address
linuxfortheweb.com mail is handled by 5 mail.linuxfortheweb.com.

Excellent! That's exactly what we want to see. Only one more step to bring our first name server online. (And a few more steps to set up a slave DNS server, but that will be for another day and another post.)

This Is Where The Glue Records Come In

OK, now that you've configured a name server, you can tell the world how to use it by configuring the glue records at your registrar. I'll also finally explain why I created those two extra A records for ns1.linuxfortheweb.com and ns2.linuxfortheweb.com very early in this article.

First, what is a glue record? It is a record, stored at the registry for your domain, that contains the IP addresses of your name servers; this is what allows the name servers for your domain to live within the zone itself. For example, if I wanted names within linuxfortheweb.com to be hosted on ns1.linuxfortheweb.com and ns2.linuxfortheweb.com, I would need glue records at Namecheap to tell clients what the IP addresses are for those names. Otherwise, there would be a loop when a client tried to look up a name, going something like this:

Client to root name servers: What’s the name server for the linuxfortheweb.com zone?

Root Name Server: ns1.linuxfortheweb.com

Client to root name servers: That’s a name in the zone I was asking about! Now what?

Glue records allow the registrar to provide the IP information to the client, breaking this loop and allowing the client to know how to contact the name servers for this domain.

Some registrars will allow you to simply create these records by telling them what the IP address for each of your name servers is. But, some will only allow you to enter a name…so, you’re in the tricky situation of needing DNS service before you can set up DNS service! Luckily, Namecheap allows you to enter names and their IP addresses when creating glue records, and then allows you to assign those as the name servers authoritative for your domain. This makes the process pretty painless, if still a little confusing for newbies.

Creating Glue Records At Namecheap


In the Namecheap dashboard, click on the domain name to open the dashboard for that domain, and click the menu item labeled Nameserver Registration in the left menu.

This will open a simple form, where you can associate name server names within your domain with IP addresses.

Namecheap glue record editing form

Editing glue records at Namecheap

Here, you'll fill in the names and IP addresses of your name servers. You'll need at least two. You can "fake" it by pointing two names at the same DNS server, but if you're resorting to that, you may be better off letting your registrar or a third-party DNS provider handle your DNS service until your needs have grown to justify two DNS servers.

After you’ve saved this, and given it a little time to propagate, your DNS servers should be live. Take a break for a little while and come back for the final section where we’ll go through some tools to test our new DNS server(s).

Test Time!

Check the Glue Records

First, check to be sure the world now knows about your DNS servers. You can use a variety of tools for this, but I usually just use the whois command.

$ whois linuxfortheweb.com

This will produce a long list of information, including contact details for your zone, the registrar information, and the registration and expiration dates. For our purposes, we just want the field labeled Name Server:.
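If you only want that field, you can filter the output; the grep pattern is a guess that may need adjusting, since whois output formats vary by registrar:

```
$ whois linuxfortheweb.com | grep -i 'name server'
```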


Success! I now see that the glue records are active. You can also use the dig command with the NS option to query any of the root name servers for your TLD to find out name server information. In a later post I’ll talk more about DNS troubleshooting tools and tactics, as well as other DNS-related topics.

Check the NS Records

The host command used earlier has a number of options for easily testing a variety of aspects of DNS. To check name server records, the -t ns option can be used:

$ host -t ns linuxfortheweb.com
linuxfortheweb.com name server ns1.linuxfortheweb.com.
linuxfortheweb.com name server ns2.linuxfortheweb.com.

Once again, we have the results we were expecting. Cool!

So, let’s make sure those servers are responding as they should be:

$ host linuxfortheweb.com ns1.linuxfortheweb.com
Using domain server:
Name: ns1.linuxfortheweb.com

linuxfortheweb.com has address

Pow! Just what we wanted to see.

Alright, this has been quite a long article, but hopefully you’ll find setting up a new name server a little less intimidating. In the next installment on DNS, I’ll cover setting up slave servers, and touch on some other record types. I’ll also talk about some of the more advanced DNS management features of Webmin, including automatic slave server configuration.

Choosing a Linux Distribution for Your Web Server



Several years ago, I wrote a post about choosing a Linux distribution for a web server. It's been so long that I don't even remember where I posted it (so, unfortunately, I can't link to it), and it's probably time to revisit the subject, as it comes up pretty frequently in our forums and in conversations with customers. The choice is somewhat more obvious today than it was back then; I recall I covered at least five distributions (plus, I believe, Solaris and FreeBSD) in that previous article. Today, the leaders in the server operating system market are pretty clear, at least for Open Source web platforms such as Node.js, Ruby, Python, PHP, Perl, or Go. Because there are clear market leaders, I'm going to focus my attention on just three Linux distributions: CentOS, Debian, and Ubuntu.

I will briefly explain why there are only three distributions most people should be considering for server deployment, and I’ll also briefly mention some situations where you might want to branch out and consider other options.

So, let’s get on with it, and pick out the right Linux distribution for your new web deployment!

Lifecycle Is Really, Really, Incredibly Important

The average server remains in service for over 36 months, and I have a couple of machines that have been in use for over six years without an OS upgrade! Upgrading the operating system on a production server, even when a remote or in-place upgrade option is available, is prone to breaking existing services in unpredictable ways, or at least in ways that are difficult to predict without a long, time-consuming audit of all the software running on the system, how the pieces interact, and how they will change when upgraded to newer versions.

Thus, one goal when selecting an OS for your server should be to ensure you have plenty of time between mandatory upgrades. Of course, nothing stops you from upgrading earlier than you need to: if you want newer packages and have the time to perform the upgrade, or to migrate to a new server before the OS reaches its end-of-life date, go right ahead. What we are more concerned about is how soon that decision will be forced on us.

With regard to lifecycle of the major Linux server distributions, CentOS (and RHEL) is, by far, the king, with a 10 year support period. Ubuntu LTS is second with a 5 year cycle. Debian is somewhat unpredictable, but always has at least a 3 year lifecycle; sometimes there may be an LTS repository that will continue support for a given version.

Non-LTS Ubuntu releases should not be considered for server usage under any circumstances, as the lifecycle of ~18 months is simply too short. Likewise, Fedora Linux should not be considered for any server deployment.

The end-of-life for current CentOS releases is as follows:

CentOS 5 Mar 31, 2017
CentOS 6 November 30, 2020
CentOS 7 June 30, 2024

For Ubuntu LTS:

Ubuntu 10.04 LTS April, 2015
Ubuntu 12.04 LTS April, 2017
Ubuntu 14.04 LTS April, 2019

For Debian:

Debian 6 (with LTS support) February, 2016
Debian 7 ~Late 2016 estimated

Lifecycle Winner
CentOS by a five year landslide. If you don’t know when you’ll have the time and inclination to upgrade your server OS or move to a new server, CentOS may be the best choice for your deployment, if the other deciding factors don’t sway you to something else. Not having to think about server upgrades until 2024 is pretty cool.

Package Management

The reason a long lifecycle for your server operating system is so important is that you need to be able to count on your OS to provide security updates for the useful life of your server. And the method by which software updates, particularly security updates, are delivered is vitally important: it needs to be easy, reliable, and preferably something you can automate without risk.

All of the distributions in this comparison have excellent package management tools and infrastructure. In fact, they are all so excellent that I was tempted to ignore this factor altogether. But, there are some subtle differences, particularly in the available package selection. And, if you’re considering going outside of the Big Three Linux distributions covered here, or are considering a BSD or Windows for your deployment, you should definitely consider how updates will be handled, as the picture is not nearly as pleasant on every distribution and OS, and many cannot be reliably automated.
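As a concrete example of automation, both ecosystems ship standard tools for applying updates automatically; the commands below are a sketch to be run as root (package and service names can vary slightly between OS versions):

```
# yum install yum-cron && service yum-cron start      (CentOS/RHEL)
# apt-get install unattended-upgrades                 (Debian/Ubuntu)
# dpkg-reconfigure -plow unattended-upgrades          (enable the latter)
```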


The package manager invented for Debian, and also found on Ubuntu, is called apt. It is a very capable, fast, and efficient package manager that handles dependency resolution and downloading and installing packages from both the OS-standard repositories and third-party repositories. It is easy to use, has numerous GUIs for searching and installing packages, and can be automated relatively reliably. apt installs and manages .deb packages. It is reasonably well-documented, though it has some surprising edge cases.


Yum, aka Yellowdog Updater, Modified, was initially developed as the Yellow Dog Updater for the Yellow Dog Linux distribution (a special build of Red Hat/Fedora for Macintosh hardware), and was then forked and enhanced by Seth Vidal. yum installs and manages RPM packages, and is found on CentOS, Fedora, RHEL, and several other RPM-based distributions. There are both command line and GUI utilities for working with yum, and it is well-documented.

Which is better?

Choosing between package managers is difficult, as both mostly have the same basic capabilities, and both are reasonably reliable. They both have been in use for many years, and have received significant development attention, so they are quite stable. I believe you could easily find fans of both package managers, and I wouldn’t really want to argue too strongly either way.

I’ve worked extensively with both, and the only time I had a preference was when I was creating my own repositories of packages and when I needed to customize the package manager, and in both cases yum was much more hacker-friendly. Creating yum repositories is as simple as putting some files on a webserver, and running the createrepo command. Creating apt repositories is much more time-consuming, and requires learning a number of disparate tools, and creating scripts to automate management of the repositories.
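To show how little is involved, here is a minimal sketch of building a yum repository; the paths are examples, and the createrepo call is guarded so the script degrades gracefully on systems without the tool:

```shell
#!/bin/sh
# Build a yum repository from a directory of RPMs (example path).
REPO_DIR=/tmp/myrepo
mkdir -p "$REPO_DIR"
# Copy your packages in, e.g.: cp *.rpm "$REPO_DIR"/
if command -v createrepo >/dev/null 2>&1; then
    # Generates the repodata/ metadata directory that yum clients read
    createrepo "$REPO_DIR"
else
    echo "createrepo not installed; install it and re-run"
fi
```

Serve that directory over HTTP, and clients only need a small .repo file under /etc/yum.repos.d/ with a baseurl pointing at it.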

Package Management Winner
yum on CentOS, by a small margin, if you plan to host your own package repositories. If you have no need for your own repos, or are already familiar with apt, either as a user or developer, it is a tie.

Package Selection

Closely related to package management is package selection. In other words, how many packages are readily available for your OS from the system standard repositories, and how new are those packages? Here, there are some interesting differences in philosophy between the various systems, and those differences may help you choose.


CentOS package selection is by far the smallest of these three distributions, in the standard OS repositories. In the Virtualmin repositories, we have to fill in the gaps by providing a number of packages we consider core to hosting services. CentOS is missing things like the ClamAV virus scanner for email processing, the ProFTPd FTP server (among the most popular and most feature-filled FTP servers available), and others; an annoyance the other two distros do not make you endure. CentOS has about 6,000 packages in the standard repository.

On the other hand, CentOS has the Fedora EPEL repositories, which provide Fedora packages rebuilt for CentOS. This expands the selection of available packages on CentOS by a couple thousand extra packages. One thing to keep in mind is that EPEL is not subject to the lifecycle promises of the official CentOS repositories, and relies on volunteer contributions to keep the packages up to date (much like Debian). Most of the popular packages are pretty well-maintained, but I have occasionally seen security updates fall behind in the EPEL repos for some packages for older versions of CentOS, which can be worrying. I generally advise selectively enabling EPEL repositories, by using the includepkgs or exclude options within the repo configuration file. In this way, you'll know exactly which packages have come from EPEL and which ones need extra caution as time passes to ensure they are kept up to date and secure.
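A selectively-enabled EPEL configuration might look something like this excerpt (the includepkgs globs are illustrative; list only the packages you actually want from EPEL):

```
# /etc/yum.repos.d/epel.repo (excerpt)
[epel]
name=Extra Packages for Enterprise Linux
enabled=1
gpgcheck=1
includepkgs=clamav* proftpd*
```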

CentOS packages in the latest release also tend to be older than those found in the latest Ubuntu release. Often this merely depends on who has had a more recent major version release, and for the moment CentOS 7 has some newer packages than the latest Ubuntu 14.04 LTS release. But, the latter also has newer versions of some important packages despite being released earlier.

CentOS is particularly strong (or weak, depending on how you look at it) about keeping the same version of packages throughout the entire lifecycle of the OS release. Thus, CentOS 7 will have Apache version 2.4.6 throughout the entire ten year life of the OS. Security updates will be applied as patches to that version of Apache, rather than adding new versions to the repository. This ensures compatibility throughout the entire lifecycle, and makes it much more predictable that your server will continue to function through security updates. However, it also means that in five years you'll be wishing for newer versions of PHP, Ruby, Perl, Python, MySQL or MariaDB, and Apache. It is a double-edged sword, and for some people the cost is too high.

In addition to the EPEL package repositories, there is also the Software Collections (SCL) repository. This repository includes updated versions of popular software, mostly programming languages and databases. There is currently SCL support for CentOS 6, but it is likely to be available for CentOS 7 as the packages found in CentOS 7 become more dated. This can allow you to continue to use an older OS version while still utilizing modern language and database versions. You can read more about the Software Collections in the CentOS Wiki.


Ubuntu, with all repositories enabled (including universe), has about 23,000 packages. As you can see, there are a lot more packages available for Ubuntu than CentOS. But, many of the less popular packages are considerably less well-maintained. Sticking to the core repositories (main and security) may be advisable, in the same way that avoiding general use of EPEL on CentOS is advised. It’s best to know your packages are being well-cared for and that lots of other people are using those packages, so bugs are found quickly.

In our Virtualmin repositories for Ubuntu, we don’t have to maintain any binary packages aside from our own programs, which is indicative of how well-equipped the standard Ubuntu repositories are for web hosting deployments. It is possible to install nearly anything you could want or need, and in a relatively recent version, on the latest Ubuntu release. Ubuntu is also less strict about keeping the same version, and more likely to provide multiple versions, of common packages, like Apache, PHP, and MySQL or MariaDB. This makes Ubuntu a favorite among developers who like to stay on the bleeding edge of web development tools like PHP, Ruby On Rails, Perl Dancer, Python Django, etc.

In short, Ubuntu has far more packages, and generally more recent packages, than CentOS. Ubuntu usually has more recent packages than Debian stable releases as well, and a better update policy in terms of stability. Ubuntu's update policy is not as strict or predictable as that of CentOS, but it is unlikely that the minor version changes that can happen on Ubuntu will cause compatibility problems with the core hosting software.


Debian has the most packages in its standard repositories, with something along the lines of 23,000 packages. The popular packages tend to be well-maintained by a veritable army of volunteers and using excellent infrastructure to assure quality. However, many of the packages will be quite old, at any given time. And there is less assurance of compatibility between updates in Debian than in CentOS, or even the core Ubuntu repositories.

Given Debian’s short lifecycle vs CentOS, and Ubuntu’s ability to tap into the universe repository for access to roughly the same number and quality of packages as Debian, it is hard to argue that Debian leads in this category, even though historically its huge selection of packages was hard to beat. Debian’s stable release also tends to have somewhat older packages, even in the beginning of its lifecycle, which can be a negative for some deployments.

Package Selection Winner
Ubuntu, if sheer number and newness of packages is most important. Or, possibly CentOS, by a small margin, if you prefer stability over newness, and prefer to ensure your software never stops working due to incompatible changes in software running on the system.


Upgrades

Generally speaking, I recommend against frequently upgrading servers to entirely new versions of the OS, since it is time-consuming and can introduce subtle malfunctions that are hard to identify and fix. When you do need to upgrade, the ability to do so without physical access to the system is valuable. Remote upgrades can be somewhat nerve-wracking on servers you don't have easy hands-on access to, but some distributions handle them better than others.

Debian and Ubuntu

apt has long been an accepted method of performing an OS upgrade on Debian, since long before Ubuntu even existed. The apt-get dist-upgrade command will handle not just dependency resolution, but it will also handle packages that have been made obsolete by newer packages or situations where various libraries have moved to new packages. This allows a system to be upgraded to a new version with very little disruption, and because it has been in use for many years, it is generally pretty reliable and a well-supported method of upgrading the system.
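In outline, such an upgrade looks something like this (release codenames are examples; run as root, ideally inside screen or tmux in case your connection drops):

```
# sed -i 's/wheezy/jessie/g' /etc/apt/sources.list
# apt-get update
# apt-get upgrade
# apt-get dist-upgrade
```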

The process of upgrading Debian or Ubuntu using apt is quite similar, though in my experience Debian upgrades have historically been smoother than Ubuntu upgrades. This is mostly because of the more conservative nature of Debian development, and because Debian users more commonly mix and match repositories (to get newer packages, or for development purposes), so community testing of various package version combinations within each system version is broader, if not deeper. This is a historic difference, based on my own experiences with Debian and Ubuntu upgrades, and may be alleviated by the much larger number of Ubuntu users today.

The important thing here, however, is that upgrades on Debian or Ubuntu are a relatively painless affair, at least when compared to CentOS.


CentOS

Upgrading a CentOS system is more cumbersome. While it is possible to perform an OS upgrade with yum, it is not currently recommended or supported by the CentOS developers, so remote upgrades are very challenging. In fact, there isn't even a very clear path for upgrading from CentOS 6 to CentOS 7 while sitting at the console. There are new tools in development for handling OS upgrades using yum (fedup on Fedora and redhat-upgrade-tool on RHEL/CentOS), which will likely eventually provide a reasonable upgrade process. But I have never seen an upgrade using this process complete without significant manual correction of issues afterward. I would not trust this method to upgrade a remote system, unless I had KVM access, and remote hands available in the data center to insert a rescue CD should it come to that.

In short, CentOS should be considered a “cannot upgrade” OS for servers in remote locations. The only tools for performing remote upgrades are very early alpha quality at best and are not recommended by their developers for production systems.

Upgrade Winner
Debian, because of its long history of users upgrading via apt and its ideology of mixing and matching packages from various repositories, relying on the dependency metadata of the packages to allow them to reliably interoperate. Ubuntu provides a reasonable upgrade path using the same mechanism, and is a very close second. CentOS isn’t even in the game, and cannot be upgraded remotely via any reasonable mechanism.


Popularity

Ordinarily, I don't recommend looking to popularity as a major deciding factor when choosing software, but for a variety of reasons it does make sense to choose tools that are used by a reasonably large community. This is especially true for Open Source software.

Popular software will have more people using it, more people asking and answering questions about it online, and more people who are experts or at least comfortable working with it. This ensures you can get help when you need it, you'll be able to find plentiful documentation, and you'll be able to hire people with expertise if you get stuck in a situation that's over your head.

On this front, things have shifted quite a bit in the past several years. CentOS once ruled the web server market, with a huge market share advantage; among our many thousands of Virtualmin installations, CentOS accounted for approximately 85%. Today, CentOS is still the most popular web server OS, with about 50% market share (depending on who you ask and which specific niche you're talking about, this may vary quite a bit), with Ubuntu following closely behind at 30% (in some niches it may even hold a larger share than CentOS), and Debian behind that with about 15%.

For the majority of users, any of these three systems has achieved the minimum level of popularity necessary to ensure a large and vibrant community of developers, users, authors, and freelancers, available to make the system work well in a wide variety of use-cases. I would not hesitate to recommend any of these systems, but would caution against going outside of them, because the user base of everything else is so very small.

Popularity Winner
CentOS, but it probably doesn't matter all that much. With a 50% market share, you're most likely to find the help you need when problems or questions arise. But Ubuntu and Debian also have very large and active communities, and you're likely to find all the help and documentation you need for any of them.

Your Experience Level

This one won’t have a winner that I can choose for you, and simply has to be decided based on your own experience level. And, it may even be the most important single factor. If you are an expert on one distribution, but a novice on the others, you would almost certainly want to choose the one you know over the ones you don’t (unless others on your team have different expertise).

If you use Ubuntu on your desktop or laptop machine, you may find that using an Ubuntu LTS release on the server provides the least friction; you can develop in roughly the same environment you’ll be deploying into. Likewise, if you are a Fedora user on the desktop, CentOS is an obvious choice, because they share the same philosophy, package manager, and many of the same packages (Fedora can be seen as the rapidly moving development version of CentOS, and most packages and policies that find their way into CentOS began by being introduced into Fedora a year or more before).

Of course, if you have no strong existing preference, it would be wise to consider your needs for your systems and compare the other factors in this article.

Experience Winner
You! You get to choose from some of the most amazing feats of software engineering ever to exist, representing millions of person-hours of development, and they’re all free and Open Source. We live in amazing times, don’t we?

Some Final Thoughts

If you’ve made it this far, congratulations! You now know I like all three of the most popular web server Linux distributions quite a bit, and think you will probably be pretty happy with any of them. You also know that CentOS is possibly the “safest” choice for new users, by virtue of being so popular on servers, but that Ubuntu is also a fine choice, especially if you use Ubuntu on the desktop.

But, let’s talk about the other distributions out there for a moment. There are some excellent, but less popular distributions, some of them even have a reasonable life cycle and a good package manager with good package selection and upgrade process. I won’t start naming them here, as the list could grow quite long. I do think that if you have a Linux distribution that you are extremely fond of, and more importantly, extremely familiar with, and the rest of your team shares that enthusiasm and experience, you may be best off choosing what you know, as long as you do the research and make sure the lifecycle is reasonable (three years is a little short, but most folks would be OK with a 5 year lifecycle, especially if upgrading is reasonably painless).

There are also a variety of special purpose distributions out there that may play a role in your deployment, if your server’s purpose matches that of the distribution. Some good examples of this include CoreOS or Boot2docker, which are very small distributions designed just for launching Docker containers, and those containers would include a more standard Linux distribution. Those are outside of the scope of this particular article, but I’ll talk more about them in a future post.

And, if you’ll be installing the Virtualmin control panel on the system (and I think you should, because it’s the most powerful Open Source control panel and also has a well-supported commercial version), you’ll want to make sure it’s one of our Grade A Supported operating systems.

Virtualmin Memory Usage (and Other Tales of Wonder and Woe!)

I’ve noticed over the years that one of the most common sources of confusion for new Virtualmin users, or users who are new to Linux and web hosting in general, is memory usage. I’ve written up documentation about Virtualmin on Low Memory Systems in the past, but it focuses mostly on helping folks with low memory systems reduce memory usage of their Virtualmin (and all of its related packages, like Apache, PHP, MySQL, and Postfix) installation. It goes into interesting detail about Webmin memory usage, library caching in Virtualmin, etc. but doesn’t go into things like the memory usage of various services in a Virtualmin (or any LAMP stack) system. This article will briefly address each of these subjects and provide real world numbers for how much memory one should expect a Virtualmin installation to require.

A side story in all of this is how Virtualmin compares to other web hosting control panels. Somehow, this is considered interesting data for some folks, though I can’t really fathom why, given the huge differences in functionality available, particularly when comparing control panels with extremely limited web-focused functionality with full-featured control panels (like Virtualmin, cPanel, or Plesk) that provide mail processing with anti-virus and spam filtering, database management, etc.  But, it comes up a lot. So, let’s get some hard numbers for Virtualmin and talk about where those numbers come from. If anyone happens to have data about memory usage of other control panels, feel free to post them in the comments (though, I doubt any control panel will use vastly more or less memory than Virtualmin, unless it’s written in Java, or something similar).

Where does the memory go‽

The first thing I want to do is break down memory usage in a production Virtualmin system, and talk about which components require large amounts of memory, and which ones can be reduced through options or disabling features.

Virtualmin system top

top sorted by memory usage on a very busy 8GB server

The above image is the output of the top command on a Virtualmin system that has several active websites, including a large Drupal deployment (the one for Virtualmin.com which has ~30,000 registered users, ~100,000 posts and comments, and receives about 100,000 visitors a month, at time of writing) and all of our software download repositories. As you can see the system has 8GB of RAM and 2GB of swap memory. Here’s what we see is using the majority of memory on this system, in order of size:

  • mysqld – This is the MySQL server process. It is configured with quite large buffers and cache settings, in order to maximize performance for our Drupal instance and other applications that access the database, such as the Virtualmin Technical Support module (which can create tickets in our issue tracker). This is the largest single process on the system, which is likely to be true on most systems with large database-backed websites. It has 2.3GB of virtual memory allocated, but only 418MB of that is resident in physical RAM; see the note below about virtual vs. resident size.
  • clamd – This one always surprises people, and folks often forget about it when calculating their expected memory usage. ClamAV is very demanding, because it loads a large database of virus signatures into memory. Virtualmin allows it to be configured as either a daemon or a standalone executable…but the standalone version is extremely slow to start, and causes a spike of both CPU and disk activity when starting. So, if you plan to process mail (on any system, regardless of whether Virtualmin is involved), you should expect to give up a few hundred megabytes to the spam and AV filtering stack. The ClamAV server has 305MB resident size.
  • php-cgi – There are several of these, and they represent the pool of mod_fcgid spawned PHP processes that are serving the Drupal website. They are owned by user “virtualmin”, because we use suexec on this system, and the site in question is virtualmin.com, and the username for that account is virtualmin. The PHP process is quite large here, larger than most, for a few reasons. Primarily, it is because we make use of a large number of Drupal modules, and some of those modules are quite demanding, so we’ve had to increase PHP memory limits for this user. These processes have ~135MB resident size, and much larger virtual size, but all of the virtual memory usage is shared across every php-cgi process for every user.
  • lookup-domain-daemon.pl – This is part of the mail processing stack, and is a server provided by Virtualmin. It allows SpamAssassin and ClamAV to have user-level and domain-level configuration settings, and allows some types of configuration for these services to be modified by end users safely. This process is 55MB with another ~40MB shared with other processes.
  • spamd – The SpamAssassin server. See, I told you mail processing was heavy! At ~50MB for each of the SpamAssassin child processes, this adds up on a heavily loaded system.
  • perl – Finally, this is actually the Webmin/Virtualmin process! My system currently has library caching fully enabled, and the total virtual process size is ~135MB (this would be smaller on a 32 bit system), with a resident size of 46MB. If I were on a low-memory system, I would disable pre-caching, and Virtualmin would shrink to about 15MB (less on a 32 bit system). This can be set in Virtualmin->System Settings->Preload Virtualmin libraries at startup? The options are “All Libraries”, “Only Core”, and “No”, which will cause the Webmin process to be 40-45MB, 20-25MB, or 12-17MB resident, respectively, depending on whether the system is 32 or 64 bit.
  • named – This is the BIND DNS server. Its memory usage is quite modest compared to a lot of the other services on this system, and is probably never something one would worry about tuning, unless you serve a very high volume of DNS requests. One thing to bear in mind, however, is that if you have enabled the caching nameserver features of BIND, and many users are using it for DNS service, the process size could grow quite large. We recommend only enabling recursive lookups for the Virtualmin server itself (or, possibly even better, forwarding those recursive lookups to another server).
  • httpd – This is the pool of Apache web server processes. Notice the virtual size is quite large, while the resident size is quite small. Much of the memory usage of these processes is shared across all of them (of which there are probably 100+ on my system at any given time, due to the number of concurrent users). The size of these processes is determined mostly by the number of modules you have installed. But, even on this system, with a number of modules enabled and actively used, the resident size is only 9MB per process. Given my 3.4GB of currently free memory, Apache could spawn over 300 additional processes (beyond the 100 or more already running) without bumping into the memory limitations of this system. Apache often gets accused of being a memory hog compared to other web servers, but that’s often an unfair comparison between an Apache with a bunch of large modules (like mod_php, or mod_perl, neither of which are needed for most Virtualmin systems) and a stripped down lightweight server, like nginx, which simply doesn’t have any large modules that can be enabled.

Note: VIRT and RES are indicators of the type of memory that has been allocated; VIRT includes the resident memory, as well as memory-mapped files on disk, shared libraries which share RAM with other processes, etc., while RES is the resident memory usage, which roughly reflects how much RAM is dedicated to this process.
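The same numbers can be pulled outside of top with ps. This is a sketch assuming GNU ps on a Linux system; flags and column names differ on other platforms:

```shell
# List the ten largest processes by resident size.
# VSZ is the virtual size (top's VIRT) and RSS the resident size
# (top's RES), both reported in KB.
ps -eo pid,vsz,rss,comm --sort=-rss | head -n 11
```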

There are many other processes on this system, including the rest of the httpd processes, but these few processes already explain where the vast majority of memory on the system is going, and so we won’t dig any deeper into it for this story.

Just for fun, let’s see a somewhat smaller system’s memory usage:


Memory-sorted top output on a moderately loaded 4GB Virtualmin server

This is a ~4GB virtual machine, and I’ve temporarily disabled library pre-caching in Virtualmin, which makes the process size about 17MB (it is a 64 bit system). Since it’s so small, it doesn’t even show up in the list, when sorted by memory usage. In this case, the large processes start with MySQL, once again configured with somewhat large buffers and caches. Java shows up here, which is uncommon for me, since Java is such a beast to work with, but I have a Jenkins CI instance running on this box. And, then the mail filtering stack is next, and slightly smaller than on the above system. I don’t have ClamAV running on this box, since the only email it processes is received by people running Linux and we don’t worry so much about viruses in email. And, then comes php-cgi, which is much smaller on this system, since it only runs moderately small WordPress instances, and a pretty hard-working MediaWiki installation for doxfer.webmin.com.

It’s also possible to run Virtualmin in a very small amount of memory, particularly if you don’t need to process mail on the system. We recommend at least 256MB for a 32 bit system, and 384MB for a 64 bit system, even if you won’t be running a mail stack. While Virtualmin itself doesn’t need more memory, the performance of most web applications would be pretty abysmal on anything less. MySQL performance is directly correlated with the amount of memory you can devote to it. Using nginx (which is also supported by Virtualmin) may help in reducing the needed memory usage, though a minimal Apache configuration won’t be much larger.


Virtualmin uses somewhere between 12MB and 46MB resident memory, and up to ~150MB virtual, depending on whether library caching is enabled and whether it is a 32 or 64 bit system.

If you’re processing mail with spam and antivirus, Virtualmin will, by default, also run a 45-55MB daemon for assisting with processing mail.

All of this is dwarfed by the actual services being managed by Virtualmin, like Apache, MySQL, ClamAV, SpamAssassin, Postfix, etc.

If you need to run Virtualmin on a very small-memory system, the best thing you can do is off-load email to some other system or service, since the full mail processing stack with SpamAssassin, ClamAV, Postfix, and Dovecot can easily add up to a few hundred MB.
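To put a number on that for your own server, a rough one-liner like this sums the resident sizes of the filtering daemons (the process names used here are the usual defaults; adjust them for your system):

```shell
# Sum the RSS (reported by ps in KB) of the spam/AV daemons and
# print the total in MB. Prints "0 MB" if none of them are running.
ps -C clamd,spamd -o rss= 2>/dev/null |
    awk '{sum+=$1} END {printf "%.0f MB\n", sum/1024}'
```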

Interesting Links

My favorite site to refer people to when they’re wondering about what memory usage information means on a Linux system is Linux Ate My RAM!

Support Webmin, get a sweet T-shirt!


Sweet Webmin threads!


It’s been a few years since we last took Webmin to a trade show (both because of the costs involved and the time required to do it right), and so we haven’t printed any new T-shirts in a long time. Something like five years. I have had to retire most of my very well-loved Webmin T-shirts because they had developed holes or I’d stained them while cooking or they were otherwise not fit for wearing in public. So, we decided to give Teespring a try, so we could buy shirts for ourselves, and also offer them up for folks who want to show their support for Webmin by wearing a sweet T-shirt. This is the most popular design we’ve ever printed in the most popular color (navy blue).

Back of the shirt just says, “root” in big monospaced type. And, that’s awesome.

Buy a Webmin T-shirt!

They’ll be printed on American Apparel crew T-shirts, and will be screen-printed, so they should look great for years to come. Remember that American Apparel shirts are cut a little smaller than usual, so we recommend you go a size bigger than you normally buy. There are also kids’ sizes, just in case your youngster wants to rep for Webmin.


Multi-account Twittering from the command line

While working on our migration from Joomla to Drupal over at Virtualmin.com, I’ve been keeping folks apprised of what I’m up to by posting the highlights to Twitter on our virtualmin Twitter account. So, I’ve had to switch back and forth quite a bit from my personal account. It also means I have to have another tab open quite frequently for Twitter. I looked around for a standalone Twitter client that supports multiple accounts, and found the pickings were pretty slim on Linux (a few Air clients run on Linux, but not very well, unfortunately). But, I got to thinking that a command line client would be ideal…so I searched the web, and found a brief article here with a simple one-liner to submit a tweet to Twitter. The fact that Twitter works with a one-line shell script is awesome, and explains why there are so many Twitter clients.

Anyway, I first thought, “I could just make copies of that for each account.” Then I could call “twitter-swelljoe” for my personal account and “twitter-virtualmin” for my business account. But that seemed wrong, somehow.

So, I wrote a version that accepts a command line argument of the username, and then sends the rest as a tweet.

USER=$1
shift
TWIT="$@"
curl -u $USER:$PASS -d status="$TWIT" http://twitter.com/statuses/update.xml>/dev/null

Nothing really fancy going on here, but the “shift” is saying, “drop the first token from the arguments list”, and then $@ says “give me the rest”. So, no quotes required for the tweet. I saved the file as tw, and use it like so:

tw swelljoe Tweeting from the command line!
tw virtualmin Business-y kinds of tweets from the command line.

This assumes one password for all Twitter accounts, which is exactly what I do anyway (I use SuperGenPass, which always generates the same password when given the same URL and passphrase). You could instead store the passwords in an associative array (gotta be using Bash 4.0+ for that), something like this (I can’t test this, as I don’t have bash 4.0!):

declare -A PASS
PASS[swelljoe]='my-personal-password'
PASS[virtualmin]='my-business-password'
curl -u $USER:${PASS[$USER]} -d status="$TWIT" http://twitter.com/statuses/update.xml>/dev/null

There are ways to fake associative arrays in older versions of bash, or use other tools, but since I don’t need multiple passwords, I’m going to just hand-wave that away.

I find it amusing that the first version of this twitter client, at 137 characters, is a valid tweet. It makes me want to go golfing and see how small I can make an even more functional version…at the very least, I can’t resist:

tw swelljoe `cat tw`

And then I have to go change my password after realizing I tweeted the real version rather than the demonstration version. Good thing nobody follows my personal tweets.

First impressions of the G1

Someone over on Hacker News asked me what I thought about the new G1 (aka “Google phone”), as compared to the iPhone (which Jamie wrote about here).  I figured those comments would be generally useful to folks thinking about picking between the two coolest phones available right now.  I’ve now had a full day to play with it, and I’ve spent several days tinkering with iPhones, so here’s my off-the-cuff “review” of the new G1.

The hardware is mildly disappointing. Not as nice “feeling” as the iPhone (though the 3G iPhone also now has a plastic back and doesn’t feel as nice as the first-gen iPhone, so I guess things are tough all over), or even as solid feeling as my old Sidekick. When taking the back off to put in the battery and SIM card, I felt like it was going to break. It didn’t, but it felt like it was. Likewise for the little covers for the data/charge port and the SD memory slot…they’re plastic and tiny and feel fragile. The one exception is the keypad, which feels very nice to me.  But, keeping in mind that the G1 is dramatically cheaper than the iPhone 3G ($179 up front and cheaper per month for the plan–though I haven’t actually looked at different plans, I just kept the one I’ve been on, which is about $10 cheaper than iPhone–so $240 over 2 years), I think it’s a great buy, and a whole lot of hardware for a very small price.  I think I probably would have been happy to pay $20 more for slightly sturdier construction, though, particularly on the port cover…I have a bad feeling that it’s going to get broken long before I’m ready to upgrade to a new phone.

Price wasn’t my primary deciding factor (openness was), but it certainly didn’t hurt that I wouldn’t have to pay more per month to upgrade to the G1, while I’d pay out $240 more over 2 years for the iPhone 3G for the same service. I’m going to see if I can drop down to a smaller voice plan–the Sidekick I had before had minimum plan requirements, but I didn’t see any such requirements when signing up for the G1, so I might even be able to save $5 or $10 per month with the G1 over the Sidekick, since I rarely use the phone. I use maybe 100 minutes per month of a 1000 minute plan.

Software-wise, it’s plain awesome. The lack of two-finger gestures, as found on the iPhone, is somewhat disappointing, but it’s no slower to use the popup magnifier buttons, once you’re accustomed to it. Dragging and such is smooth and accurate (seemingly more accurate than the iPhone, for me, but maybe it just feels that way because the keypad means that I don’t have to use it for typing fiddly stuff, as on the iPhone), so I guess the touch screen is pretty good quality. Since the Android developers are some of the same folks that developed the Sidekick, everything feels very intuitive to me (where, as usual, “intuitive” means, “what I’m used to”). It also has Linux underneath everything, so that may also be a “comfort” factor for me, I’m not sure.

Basic phone features work well and are easy to use, it sounds good and clear, and having Google mail, contacts, calendar, etc. is sweet (my old phone couldn’t handle more than about 500 messages via IMAP, and I get more messages than that in a week, and obviously GMail just works great with practically infinite mail). Web pages look great, and browsing is fast. WiFi was quick and easy to setup. YouTube videos work great, both on 3G and on WiFi. It’s my understanding that T-Mobile’s 3G network is still somewhat small…so if you don’t live in the valley, your mileage may vary, but it works fine for me here in Mountain View.

I installed an Open Source ssh client off of the web called ConnectBot; no jail breaking required, which was a big issue for me with the iPhone. I don’t want to have to have permission to install arbitrary apps that I’ve written or someone else has written. I also installed Compare Anywhere, and a bubble game from the app store, and the quality of everything is really slick. Really impressive for a launch day catalog, especially since everything is free right now. I haven’t spent a lot of time with the apps on the iPhone, so I don’t have much to compare to. But, I’m excited to play with more stuff from the catalog, and I think I’m going to try my hand at writing an app or two for the platform.

Also, it worked right away when I plugged it into my Linux desktop. No futzing around with weird stuff to get music onto the device. The iPhone/iPod is a bitch in that regard. That one thing made me ecstatic in ways I haven’t felt over a device in a long time. Coming off of years of messing around with iPods and an iPhone, and it never quite working right, having a drag and drop music experience is miraculous. Take this with a grain of salt, as I may be strange. I find iTunes incredibly confusing and difficult to use, so on those occasions when I’ve given up on getting Linux to work and rebooted into Windows, or borrowed a friend’s MacBook, and run iTunes for the purpose of putting music onto the device, I’ve ended up spending a long time futzing around anyway, because I never could figure out what all the syncing options meant or how to use them…so, every time I would fiddle until something happened, and occasionally it would just end up deleting all of my songs either on the PC or the device, and I would give up in disgust. I also have trouble using several other Apple software products, and find them hard to use, so I could just be too stupid to be trusted with a computer without some adult supervision.

I think it will be interesting to see how this battle plays out in the market, and I’m certain that the G1 is just the opening salvo.  Even if the G1 “loses” the battle against iPhone (if selling out all available units even before the launch date can really be considered a “loss” in anyone’s book), and fails to pick up significant market share, there will be another battle in a few months, and the battles will come faster and more furiously as other manufacturers adopt Android.  And each one will wear Apple down, and will open a new front on which Apple will either have to engage or ignore.  For example, very low end phones will come into existence next year that provide smart phone capabilities, as will higher end devices with more capabilities or special purpose capabilities to answer niche markets.  Apple can’t fight on all fronts, and every niche it loses strengthens the value of the Android platform to developers, and thus to end users.  If Android sucked, like Windows Mobile, this wouldn’t be a cause for concern for the folks at Apple.  But Android doesn’t suck.  It’s really nice to use.  As nice as iPhone?  Maybe not…but getting better rapidly (if you tried the last developer release a few months ago, you’ll know that there have been a lot of improvements since then).

As with PC vs. Mac two decades ago, it’s a battle of two different ideologies.  On one side, you have openness: the ability for hardware makers to produce widely varying hardware while still providing the same basic user interface; and on the other, you have a single source eco-system: Apple designs every aspect of its products and can control every element of the user experience, down to the very applications that run on the platform.  Except, this time around, there are two additional elements: the telcos, and Open Source.  While the telcos are going to fight to keep all cell phones locked down pretty tightly, and try to insure that they can extract money for just about everything novel you want to do with it, the Open Source nature of Android is going to allow people to do things with mobile devices that have been impossible to date (even on very powerful, but locked down, devices like the iPhone).

To me, it looks like Apple is going to make the same mistakes they made with the Mac years ago: Treating their application developers poorly by competing with them unfairly or simply locking them out of the market, disrespecting users that want to use their devices in ways not imagined by Jobs and treating them not as customers but enemies, and finally, denial that price is a major factor in people’s purchasing decisions.  If it plays out as it did in the PC wars, Apple will find that they have fewer applications for their devices, and a steadily declining market share (even as actual sales continue to increase, since the smart phone market is growing rapidly, and all ships rise in a growing market).  Since it isn’t Apple vs. Microsoft, this time, but instead Apple vs. Open Source (and openness in general), I know which side I’m on.  I’ve had a tendency to pull for the scrappy underdog in the past, and Apple has very frequently been the scrappy underdog…but unfortunately it’s an underdog with a Napoleon complex, so it’s not exactly a good choice for replacing the old tyrant.  But, luckily, this time around, we have a wide open alternative…and it’s actually really good and really easy to use.  Maybe the “Open Source on the desktop” movement, to date, has all been preparation for this moment…the moment when Open Source can finally be a great option for regular, non-technical, consumers.  I think that’s pretty exciting.  And we’ll just have to wait and see if Apple winds up on the wrong side of history, again.

A couple of nice articles about Webmin and Virtualmin

I use Google’s awesome Alerts feature to keep an ear out for anything interesting on the web about some of my favorite topics.  Obviously, Webmin and Virtualmin are among those topics.

A great article about Webmin showed up today at Free Software Magazine, by Gary Richmond, that is a “bird’s eye view” of the subject, just touching on the high points.  In it, he gets to the heart of some of the things that make Webmin great, like its ability to edit configuration files directly and safely, rather than generating them from templates, as most similar configuration tools do.  All in all, a nice little “boss-friendly, while still being accurate” introduction to Webmin.

And Monday, the awesome blog The Next Web covered Virtualmin, the business and the project, kicking off their new series on profitable web businesses (rather than those that are running on dreams and VC cash) with this article: Companies who make money: Virtualmin. Lots of great stuff over there.

And, of course, no discussion of the blogosphere would be complete without mentioning the new browser from Google. I haven’t tried it yet, as it’s only available for Windows and I’m a Linux user when I’m working, but Jamie tells me that Webmin works just fine (expected) and there was only one quirky bit in the Virtualmin theme (surprising, since we test against Konqueror and Safari, both WebKit browsers, but easily fixed). So, if you’re running Windows, give it a go: Chrome. If you’re an IE user, please give it a go.

Creating an iPhone UI for Virtualmin

Introduction and History

Around the start of the year, I created a theme for Webmin designed to make it easier to access from mobile devices like smartphones and cellphones with simple built-in browsers. At the time I had a Treo 650, which had a very basic browser – certainly not powerful enough to render the standard Virtualmin or Webmin framed themes.

By using Webmin’s theming features, I was able to create a UI that used multiple levels of menus to access global and domain-level settings, instead of frames. The theme also changed the layouts of forms to be more vertical than horizontal, use fewer tables, and remove all use of Javascript and CSS for hiding sections.

This was released in the virtual-server-mobile theme package version 1.6, and all was good in the world. Anyone using it could now access all the features of Virtualmin from a very basic browser, and read mail in Usermin without having to rely on the awful IMAP implementations in most smartphones.

This shows what Virtualmin looked like on a Treo :

Then I bought an iPhone.

It has a much more capable browser, technically the equal of any desktop browser like Firefox or IE. The regular Webmin themes actually worked fine, although a fair bit of zooming is needed to use them. The mobile theme looked like crap, as it didn’t use any of the browser features like CSS and Javascript that the iPhone supports. Plus the layout rendered poorly due to the use of long lines of text that didn’t get wrapped at the browser’s screen width.

On the iPhone, the Create Alias page in mobile theme looked like this :

And in the regular Virtualmin theme, the Create Alias page looked like :

I mentioned this to Joe, and he pointed me at iUI, an awesome library of CSS and Javascript that allows developers to create websites that mimic the look of native iPhone applications. After trying out the demos and looking at their source code, it was clear that iUI would be perfect for creating an iPhone-specific theme.

It wasn’t quite as simple as I first thought, but after some hacking on both the theme code and iUI itself I was able to come up with a pretty good layout, as you can see in this screenshot of the Create Alias page :

Menu Implementation

Actually getting iUI to play nicely with the Webmin theming system was slightly more complex than I originally expected, though. For example, an iPhone-style multi-level menu that slides to the left is implemented in iUI with HTML like :

<ul id='main' title='My Menu' selected='true'>
<li><a href='#menu1'>Submenu One</a></li>
<li><a href='#menu2'>Submenu Two</a></li>
</ul>
<ul id='menu1' title='Submenu One'>
<li><a href='foo.cgi'>Foo</a></li>
<li><a href='bar.cgi'>Bar</a></li>
</ul>
<ul id='menu2' title='Submenu Two'>
<li><a href='quux.cgi'>Quux</a></li>
<li><a href='#page'>Some page</a></li>
</ul>
<div id='page' class='panel' title='Some page'>
Any HTML can go here.
</div>

As you might guess, CSS and Javascript are used to show only one menu or div at a time, even though they are all in the same HTML file. This is quite different to the way menus are usually created in Webmin.

To get this kind of HTML from the theme, I created an index.cgi that generates a large set of <ul> lists and <div> blocks containing all the Virtualmin domains, global settings, Webmin categories and modules. This is loaded by the iPhone when a user logs in, and allows quick navigation without any additional page loads. For example, these screenshots show the path down to the Users and Groups module. Only the last step requires an extra page load :

The index.cgi script is able to fetch all Webmin modules and categories with the functions get_visible_module_infos and list_categories, which are part of the core API. It also fetches Virtualmin domains with virtual_server::list_domains and global actions with virtual_server::get_all_global_links.

For example, the code that generates the menus of modules and categories looks roughly like :

my @modules = &get_visible_module_infos();
my %cats = &list_categories(\@modules);
print "<ul id='modules' title='Webmin Modules'>\n";
foreach my $c (sort { $b cmp $a } (keys %cats)) {
    print "<li><a href='#cat_$c'>$cats{$c}</a></li>\n";
    }
print "</ul>\n";
foreach my $c (sort { $b cmp $a } (keys %cats)) {
    my @incat = grep { $_->{'category'} eq $c } @modules;
    print "<ul id='cat_$c' title='$cats{$c}'>\n";
    foreach my $m (sort { lc($a->{'desc'}) cmp lc($b->{'desc'}) } @incat) {
        print "<li><a href='$m->{'dir'}/' target=_self>$m->{'desc'}</a></li>\n";
        }
    print "</ul>\n";
    }

The actual iUI styling and menu navigation comes from CSS and Javascript files which are referenced on every page in the <head> section, generated by the theme’s theme_header function, which overrides the Webmin header call.

Other Pages

Other pages within Webmin are generated using the regular CGI scripts, but with their HTML modified by the theme. This is done by overriding many of the ui_ family of functions, in particular those that generate forms with labels and input fields. Because the iPhone screen is relatively narrow, it is more suited to a layout in which all labels and inputs are arranged vertically, rather than the Webmin default that uses multiple columns.

For example, the theme_ui_table_row override function contains code like :

if ($label =~ /\S/) {
    $rv .= "<div class='webminTableName'>$label</div>\n";
    }
$rv .= "<div class='webminTableValue'>$value</div>\n";

The label and value variables are the field label and input HTML respectively. The actual styling is done using CSS classes that were added to iUI for the theme. The same thing is done in the functions that render multi-column tables, tabs and other input elements generated with ui_ family functions.

The only downside to this approach is that not all Webmin modules have yet been converted to use the functions in ui-lib.pl, and so do not get the iPhone-style theming. However, I am working on a long-term project to convert all modules from manually generated HTML to using the UI library functions.

Headers and Footers

In most Webmin themes, there are links at the bottom of each page back to previous pages in the hierarchy – for example, when editing a Unix group there is a link back to the list of all groups.

However, IUI puts the back link at the top of the page next to the title, as in native iPhone applications. Fortunately, CSS absolute positioning allows the theme to place this link at the top, even though it is only generated at the end of the HTML. The generated HTML for this looks like :

<div class='toolbar'>
<h1 id='pageTitle'></h1>
<a class='button indexButton' href='/useradmin/index.cgi?mode=groups' target=_self>Back</a>
<a class='button' href='/help.cgi/useradmin/edit_group'>Help</a>
</div>

The toolbar CSS class contains the magic attributes needed to position it at the top of the page, even though the theme outputs it last.