Doing Email Right (Part 1: SMTP)

picture of a mailbox made from an old V8 engine

Mail should be fast

Email service has a reputation, even among Linux-savvy users and web developers, as being extremely difficult to set up and maintain. The reputation may be well-earned, compared to setting up Apache or MySQL, but email doesn’t need to be something to fear. When you break it down into its component pieces, it becomes far less intimidating, or at least more comprehensible. And, though it seems from the outside that all of the pieces are cohesive, the pieces of a mail server are generally distinct and can be configured without much knowledge of the other pieces.

In this article, I’m going to explain the components of a mail server that can send and receive mail as well as deliver mail to local user mailboxes. Because of the depth of the subject I won’t go into detail about POP/IMAP service for clients like Thunderbird and Outlook, or spam and anti-virus filtering, or DNS for mail service (which is a quite complex subject in and of itself, particularly with DKIM and other email security features that use DNS), but I will cover those in additional articles soon.


The first thing to understand about “email service” is that it is not just one service. It is actually made up of several interconnected components, each of which communicates via one or more protocols to other services locally or remotely.

Message Transfer Agent (MTA)

The central component of any email system is the MTA, or Message Transfer Agent, aka Mail Transfer Agent. This is a server that communicates with clients and other servers using SMTP (Simple Mail Transfer Protocol). Some popular examples (in order of popularity and quality of current maintenance at the time of writing) include Postfix, Exim, Sendmail, and QMail. My examples in this article will mostly use Postfix, as it is likely the most popular option, has a good security record, good performance, and is easy to configure.

In its simplest form, the MTA either initiates or accepts an SMTP connection, in order to send or receive an email on behalf of a user. I won’t yet go into the details of the SMTP protocol, though it is a simple protocol when performing simple tasks. For now, it’s enough to know that an MTA like Postfix or Sendmail sends and receives mail via the SMTP protocol. I will show the entire life cycle of an email once I’ve discussed each component.

Mail Delivery Agent (MDA)

Once an email reaches its destination server, the MTA will hand it off to a Mail Delivery Agent, also called a Local Delivery Agent, for delivery to a local user mailbox. The mailbox could be in a variety of formats, Maildir and mbox being the most popular.

There are several mail delivery agents available for Linux and UNIX platforms, with procmail and maildrop being the most popular. The Dovecot and Cyrus mail suites both include a mail delivery agent, in addition to the POP3/IMAP servers they are best known for.

Mail User Agent (MUA)

The Mail User Agent, also simply called “mail client”, is the software used to read (and also send, though via two different protocols) email. Examples of this include Mozilla Thunderbird, Geary, Microsoft Outlook, and Evolution. Other, less obvious examples of mail clients include webmail clients like Roundcube, Squirrelmail, and Usermin, and classic command line clients like mutt and pine. The mail client will use the POP3 or IMAP protocol to retrieve mail from a POP/IMAP server, but will use SMTP to send mail.

There is another class of retrieval software called Mail Retrieval Agent, which includes fetchmail and getmail. This category of tool has become much less popular in recent years, due to evolving user needs, and won’t be covered in this article.


POP/IMAP Server

This is another type of mail server component, one which provides a standardized interface for MUAs to retrieve mail from the mail server. Popular examples include Dovecot, Cyrus, and Courier. I will primarily cover Dovecot in this series, as it is among the most popular, very fast and efficient, flexible, easy to configure, and actively maintained. It’s also what we use in a default installation of Virtualmin, so it is what I know best.

A Very Short and Somewhat Boring Story About A Single Email

Once upon a time, the duck wanted to send an email to the fox. So, the duck opened up her email client (Thunderbird, of course), and clicked “Write”. She typed what she needed to say to the fox, and clicked “Send”.

Mail Client to Mail Server Communication

Her mail client then opened a connection, using SMTP, to her Postfix mail server running in a data center in Sheboygan. Once established, the conversation between the MUA (Thunderbird) and the MTA (Postfix) went something like this (output from Postfix is indicated with an “S”, as it is operating as the receiving server in this case, while output from Thunderbird is indicated with a “C”, as it is operating as the sending client in this scenario). The server response happens immediately upon opening a connection to port 25 (or possibly other ports in the case of SSL or TLS connections):

 S: 220 ESMTP Postfix
 C: HELO
 S: 250 Hello, I am glad to meet you
 C: MAIL FROM:<duck@example.com>
 S: 250 Ok
 C: RCPT TO:<fox@example.net>
 S: 250 Ok
 C: DATA
 S: 354 End data with <CR><LF>.<CR><LF>
 C: From: "Duck" <duck@example.com>
 C: To: "Fox" <fox@example.net>
 C: Date: Wed, 15 Apr 2015 16:02:43 -0500
 C: Subject: Fancy Clocks?
 C:
 C: Hello Fox,
 C: Do you have any fancy clocks?
 C: Your friend,
 C: Duck
 C: .
 S: 250 Ok: queued as 34A1E36F0
 C: QUIT
 S: 221 Bye

This almost looks like a casual conversation between server and client, but there is a reasonably well-defined protocol that determines the negotiation and sending of mail. In the above example, the server and the client both belong to the duck; the next section will cover the conversation between servers that delivers the mail from Duck’s server to a server belonging to Fox.

The first line is the server introducing itself, telling the client the status of the server (220, which means “service ready”), the name of the server ( in our example), the protocol that the server speaks (ESMTP stands for Extended Simple Mail Transfer Protocol), and the MTA software (Postfix, in this case).

The next line is simply the client acknowledging the connection by identifying itself with its own name ( in this example). “HELO” is simply the protocol-defined way this is done.

In the third line, the server acknowledges (250, which means “Requested mail action okay, completed”) the connection and agrees to communicate with the client.

In the fourth line, the client tells the server that it has mail it would like to send, from sender duck@example.com.

On the fifth line, the server confirms that it is willing to send mail on behalf of the sender. Postfix (and Sendmail, etc.) has a variety of ways to determine who can send mail and where they can send it from, which I’ll discuss in a later article. Likewise, for the sixth and seventh lines, the client tells the server who the mail is for, and the server agrees to accept it for sending.

The next several lines are the client saying it is ready to send mail data, the server agreeing to accept it (reply code 354, or “Start mail input; end with <CRLF>.<CRLF>”), and the mail data from the client.

Once the client is finished sending mail (indicated by a single “.” on a line by itself), the server queues the mail and gives the client a unique identification number for the mail. This mail ID number can be useful in tracking down mail server troubles. End users rarely need to concern themselves with such implementation details, but system administrators need to know how to troubleshoot mail delivery problems, and mail ID is extremely important for that purpose.

Once the mail is queued on the server, the connection is closed, and now the server will attempt to send it to the appropriate mail server for the destination address on the email (the RCPT TO address in the above session).
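The same conversation can also be driven from code. Here is a rough sketch using Python’s standard smtplib, which performs the HELO, MAIL FROM, RCPT TO, and DATA steps shown above on your behalf; the server name and addresses are the placeholder names from our story, so substitute your own:

```python
# A sketch of the transcript above, driven from Python's standard library
# instead of Thunderbird. The host name and addresses are placeholders
# from the story -- substitute your own server and accounts.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = '"Duck" <duck@example.com>'
msg["To"] = '"Fox" <fox@example.net>'
msg["Subject"] = "Fancy Clocks?"
msg.set_content("Hello Fox,\nDo you have any fancy clocks?\nYour friend,\nDuck")

def send(message, server=""):
    # smtplib handles the HELO/MAIL FROM/RCPT TO/DATA exchange for us
    with smtplib.SMTP(server, 25) as smtp:
        smtp.send_message(message)

# send(msg)  # uncomment once you have a reachable mail server
print(msg["Subject"])
```

The send() call is left commented out because it needs a live SMTP server to talk to; everything else runs standalone.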

Server to Server Communication

The Postfix server that queued the mail in the previous section will use the SMTP protocol (again, but as a client rather than a server) to connect to the MTA that serves the domain. The sending MTA will use DNS to figure out what server to send the mail to. DNS is out of the scope of this article, but in my next article about DNS I’ll be covering various types of DNS records, including MX records. But, in the meantime, just know that mail servers query DNS for the MX (mail exchange) record of the receiving domain to find out what server to send mail to.

The conversation between the Postfix server on and the MTA (which could be Postfix or any of a wide variety of MTAs) on will look strikingly similar to the conversation between Thunderbird and Postfix in the previous example. This is unsurprising, since they both use the SMTP protocol to transmit mail.

So, we won’t go into detail on that part of the story of this particular email, but I will point out that we could find the email in the queue of the sending server (while it is in the queue, which could be less than a second on a lightly loaded system with a fast receiving server) by the email ID we talked about earlier, using the mailq or postqueue -p command.

Since we’ll be covering POP3/IMAP and client-side communications in the next article in this series, I’m going to skip over that part of the story for now, and get right to the basic configuration needed to achieve SMTP-to-SMTP conversations, both sending and receiving, and we’ll pick up the riveting tale of email ID 34A1E36F0 in the next article.

Installing Postfix

On modern Linux distributions, installing software is incredibly easy. CentOS has yum, while Debian and Ubuntu have apt. Both are easy to use, automatically resolve dependencies, and will fetch and install software with very little user involvement.

To install Postfix on CentOS, run the following command:

# yum install postfix

Or, on Debian or Ubuntu:

# apt-get update
# apt-get install postfix

Assuming your system is connected to the Internet, and the package manager is functional, within a few seconds you’ll have Postfix installed and ready to configure.

A Basic Postfix Configuration

Postfix, freshly installed from OS-provided packages in CentOS, Debian, or Ubuntu (and most other modern Linux distributions) is usually ready to start, and requires little to no configuration. But, since simply saying, “Start Postfix” seems like a cop-out, and this article is for people who want to learn how email works on a deeper level than merely starting the daemon and hoping for the best, I’ll go into more detail about the most commonly changed options in Postfix, as well as some troubleshooting information.

The primary concerns with your initial configuration, the “Just get it working!” phase, are getting mail accepted for your domain, allowing mail to be sent on behalf of your domain, and delivering mail to the right place on the system, so those are the configuration directives I’ll cover now.

myorigin – This directive determines the origin domain name that is used for mail sent from this system. By default, it will be set to $myhostname, which is a special variable that refers to the hostname of the system (in our example above that would be But, it may be preferable, particularly if you have multiple sending mail servers, to use the parent domain name (, instead. To do that, you would change this option to:

myorigin = $mydomain

Note that this option only affects information that is used during SMTP communication. It does not limit or alter mail being sent through the server; the client sending email determines what the From address will be on any mail sent through your system, and it is that field that determines what appears in mail clients for the recipient of mail. If you only have one mail server for your domain, you probably do not need to change this option, and changing it adds some configuration complexity (you also have to set up an alias for each user).
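For reference, such per-user aliases commonly live in the classic aliases file (often /etc/aliases, though the location varies by system); the entries below are purely hypothetical. Remember to run the newaliases command after editing so the lookup table gets rebuilt.

```text
# /etc/aliases -- hypothetical entries mapping local names to destinations
postmaster: root
root:       duck
sales:      duck, fox
```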

virtual_alias_maps – This directive is used if your mail server will be receiving mail for users in multiple domains, i.e. if you are hosting many virtual mail domains on this server. I’ll go into more detail about virtual domains in a future article, but in the meantime, I wrote a brief tutorial about virtual hosting with Postfix for my Webmin book a few years back, which is still accurate for current Postfix versions. This option will generally point to a local map file, but could also use an LDAP directory or an SQL database on a remote machine, if you have multiple mail servers. In its simplest form, you’d configure it like this:

virtual_alias_maps = hash:/etc/postfix/virtual

Where hash indicates the type of map in use (it could also be ldap, mysql, or one of many other types). The file, /etc/postfix/virtual, will be a specially formatted list of email addresses and the destination for those addresses.
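As a sketch of that format (the addresses and users here are hypothetical), the map file pairs an address with a destination, one per line:

```text
# /etc/postfix/virtual -- hypothetical entries
duck@example.com        duck
sales@example.com       duck
postmaster@example.com  root
```

After editing, run postmap /etc/postfix/virtual so Postfix can build and read the hashed version of the file.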

home_mailbox – This directive determines where user mail should be delivered on the system. Most modern systems use the Maildir format for mail delivery, as it can be more efficient for very large mailboxes than the next most popular option, mbox. It also makes incrementally backing up mail systems easier in some cases. If this option has a “/” at the end, it will use Maildir format. Otherwise, it will use mbox. So, to deliver mail to a Maildir in the user’s home directory, you’d configure it like this:

home_mailbox = Maildir/

That seems simple, and it is for very simple use cases, but in most environments, you will also want to enable spam and antivirus filtering, which generally requires more complexity, possibly using the mailbox_command option, which I will cover in a later article. In the meantime, this one option will allow Postfix to deliver mail to users on the system.
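If you’re curious what the Maildir layout actually looks like on disk, Python’s standard mailbox module can create one; the path used here is a throwaway temporary directory, not a real mailbox:

```python
# Maildir is just a directory with three well-known subdirectories.
# Python's standard mailbox module can create and read one, which makes
# the layout easy to inspect. The path here is a throwaway example.
import mailbox
import os
import tempfile

root = tempfile.mkdtemp()
md = mailbox.Maildir(os.path.join(root, "Maildir"), create=True)

# A freshly created Maildir contains cur/, new/, and tmp/
print(sorted(os.listdir(os.path.join(root, "Maildir"))))
```

New messages are delivered into tmp/ and then moved to new/; once a mail client has seen a message, it moves to cur/. That rename-based scheme is what lets Maildir avoid locking.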

smtpd_recipient_restrictions – The final piece of the puzzle for a basic Postfix configuration is to ensure that only the clients you want to allow to send mail can do so. You may have heard of an “open relay”, which is a term for any server that allows non-authenticated users to send mail through the server; being an open relay provides spammers with the ability to send their abusive email through your system (which can have disastrous consequences for your ability to deliver mail). I usually configure this option like so:

smtpd_recipient_restrictions = permit_mynetworks permit_sasl_authenticated reject_unauth_destination

These restrictions are applied in order, and the first that matches is the one that applies to any given message. permit_mynetworks allows clients on your own networks to send through this system ($mynetworks is another special variable that, by default, means the networks the server is directly connected to, but it can be expanded to include other networks); permit_sasl_authenticated allows SASL authenticated users to send through this system (for clients that are not within $mynetworks); and reject_unauth_destination rejects mail sent to destinations other than the ones this server accepts mail for. Thus, if the mail is coming from $mynetworks, it will not need to pass the other checks. SASL authentication is a complex subject and will be covered in a future article.

Starting Postfix and Troubleshooting

After configuring the few options that you choose to alter, you can start Postfix and test your installation.

On CentOS 7, Debian 8, and other systemd-equipped systems, you could start it with:

# systemctl start postfix

If there are any problems, you can check the systemd journal using the journalctl command to see why. It’s also useful to look in the mail log (/var/log/maillog on CentOS and /var/log/mail.log on Debian/Ubuntu).

On Ubuntu 14.04 and other upstart-based systems, you’d use the start command:

# start postfix

If there are any problems, the mail.log or the system log in /var/log/messages can be consulted to see what went wrong.

Sending Mail

A simple way to test a newly installed mail server is to use a local mail client to send mail from the server itself. This provides a very simple test case; external connections and clients are not needed, so you can isolate any failures down to a few specific misconfigurations, rather than trying to guess which part of a very complex system has failed.

There are several ways to send email from the command line, but one that is always available on any Postfix or Sendmail system is the sendmail command (Postfix provides a sendmail program to provide compatibility with Sendmail), so I’ll use that:

$ sendmail -t <<EOF
 > To: duck
 > Subject: Testing
 >
 > Tested!
 > EOF

If things go as expected, you should see the message appear in the inbox of the user you sent the message to. If they don’t go as expected, you can check the maillog or mail.log for clues about why.

Next Time

This is getting to be quite long, and we still haven’t really dug into client-side protocols, spam and anti-virus filtering, or DNS for mail service. But, fear not, in the next couple of weeks, I’ll be posting two new installments of this series on email, as well as another article about DNS, that cover just that.

Image credit: Kevin Dooley – Mailbox

DNS for Web Hosting (Or: What the heck is a glue record?)

Has there ever been a service that is more fraught with confusion, misconfiguration, and hair-pulling all-nighters than DNS? Maybe, but that was a rhetorical question. DNS confuses a lot of new system administrators (and, if you find yourself thinking about DNS, it means you’ve become a system administrator…but, don’t worry, I’m here to help you through your transition from normal person to keeper of UNIX wisdom).

In this article, I’m going to show you how to do some interesting things with DNS, which are all the steps you need to take for springing a brand new domain name into existence so you can put your awesome website on it! Here’s what this article will teach you how to do:

  • Search for and register a new domain at Namecheap (steps will be very similar at any registrar, but Namecheap is quite good and low-cost)
  • Configure initial records using the registrar’s name servers
  • Set up a new name server using BIND
  • Set up a new zone with address and name server (NS) records
  • Set up glue records at the registrar
  • Test your new DNS zone using command line tools

What I won’t cover in this article is setting up BIND using Webmin. I’ve written about that on a few occasions over the years, and the most recent writings are in the Webmin wiki in the BIND DNS Server module documentation, as well as the BIND tutorials section. Jamie also wrote an introduction to the Domain Name System. This post is somewhat more geared toward beginners than most of those, and also covers things that aren’t really within the scope of Webmin, like registering a name and setting up glue records at the registrar. But, once you’ve wrapped your head around the basics of DNS, you may find that Webmin (and the related documentation) can make your job administering DNS servers a little easier and faster, particularly if you have a lot of zones to manage.

Search and Register a Domain Name

While picking the perfect domain name is out of the scope of this article, there are many domain-picker tools on the web to help you out. I tend to prefer to just brainstorm for an hour or two on the theme of my business or project, and come up with a half dozen or more good enough names.

Names should be as short as you can find (but don’t worry about getting a <5 letter domain, because a lot of those are already taken, even the nonsense words). Names should probably not use special characters, though the dash, “-“, is legal in domain names and is sometimes used, even for business sites, but it makes your domain harder to type on mobile devices. Making it difficult to type on a mobile device is a big negative, because more people today browse the web on mobile devices than on traditional computers. Numbers should also be avoided if possible for the same reason. Unicode characters can be used, but you shouldn’t use them unless you are serving a market that regularly uses Unicode domain names, such as China. Setting up Unicode domain names using the International Domain Name (IDN) standard won’t be covered here, but I may address it in a future post.
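Those rules of thumb are easy to capture in a quick sanity check. The sketch below is only that: it checks a single label (the part before the TLD) against the letters/digits/dash conventions above, and it ignores IDN entirely:

```python
import re

# A rough sanity check for a single DNS label, following the rules of
# thumb above: letters, digits, and dashes only, no leading or trailing
# dash, 63 characters or fewer. A sketch, not a full hostname validator.
LABEL = re.compile(r"^(?!-)[a-z0-9-]{1,63}(?<!-)$")

def plausible_label(name: str) -> bool:
    return bool(LABEL.match(name.lower()))

print(plausible_label("linuxfortheweb"))  # True
print(plausible_label("-badstart"))       # False
```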

Once you’ve settled on some possible names or used a domain finding tool to find the name you want and that you know is available, create an account at Namecheap, or your preferred registrar. Once logged in, click on Domains->Registration.

Namecheap registration form

Enter the first part of the domain name you want. I want, because I’m working on a new book about system administration for web servers and Linux For The Web is a pretty snappy working title. You can leave off the .com, and Namecheap will list all of the popular Top Level Domains (like .com, .org, .net, .io, .biz, .co, etc.), and you can see whether they’re available. In my case, was available, so I grabbed it!

If the name you want isn’t available in your preferred TLD, don’t despair. Other TLDs are quite commonly used these days, and users aren’t terribly confused by an odd TLD. It’s true that a .com is preferred for businesses and such, and if it’s important to you to have a .com, you’ll need to try some of the other names you brainstormed earlier, or find one you can buy from its current owner.

Once you select the domain you want, you can go through the rest of the checkout process. Make sure all of your contact information is the information you want to have on your public whois record (or choose to use the privacy features provided by most registrars including Namecheap). The whois record is normally public and visible to anyone on the Internet; we will talk about how to query whois records using command line tools later in this article. Most registrars will act as a proxy on your behalf and provide their contact information instead of yours, and they will forward email to you when they receive any intended for the owner of the domain. This is usually a low cost service, but I rarely bother with it, as my information is all over the Internet already.

After completing the registration process, and paying for the domain, you can configure some initial records. You have a couple of options when setting up DNS: Many registrars will host your DNS zones for you, which is simple and requires very little back end knowledge; or, you can delegate the zone to your hosting provider’s DNS servers; or, and this is my eventual goal in this article, you can host them on your own DNS server running BIND. We’ll talk about the advantages and disadvantages of each approach as we go along.

Creating Host Records At The Registrar

The simplest way to get a domain name functioning is often to let your registrar handle DNS. This can be sub-optimal for a number of reasons, but it’ll get you started without having to install and configure your own DNS server. I will talk more about why this might be limiting, and I will show you how to set up your own name server later in this article.

At Namecheap, it is not immediately obvious where to go, as the label for the option may be confusing if you don’t know the terminology. To edit DNS records at Namecheap, click on the domain in the list of domains. This will take you to the dashboard for that domain name.

Look in the left menu for the item labeled All Host Records and click it.

All Host Records image

Clicking All Host Records at Namecheap

This will take you to a simple UI for creating new host records in your zone. To start with, it will only have a couple of records, plus the default mail forwarding option selected. I’ve edited my zone to look like the following, and I’ll explain below what each field and value means.

Domain records configuration at Namecheap

Domain records at

These two items, @ and www, are default records that are pre-configured to point to the Namecheap parking service page. We want to change this immediately, if not sooner, to something we own and get value from, even if it’s just our own landing page that says “Coming Soon!”.

The @ symbol simply means, “the domain name itself”, in this case I have filled in the IP of my web server in the IP Address/URL field. Setting up a web server to listen on this address won’t be covered here, but a later article will walk you through how to do it, and there’s a lot of resources on the web to help as well (or you could install the Open Source Virtualmin control panel and it would do most of the work for you).

The Record Type field indicates the type of record this will be. The DNS standard has defined a number of record types to help clients and other DNS servers find specific information about a zone, such as addresses, mail server names, name server names, etc. I will talk more about some of the types of records in a moment, but for now we only care about A records, or Address records.

An Address Record is a record type that maps a name, like “”, to an IP address, like

You may also see an automatically-generated SPF record that is created by Namecheap when you choose to use their mail forwarding service. We won’t be keeping this for long, but it doesn’t hurt anything to leave it alone.

For now, you can save your new records.

Now, we’d like to be able to test this, but it can take as long as a couple of hours for the registrar’s DNS servers to update, so we probably can’t actually test immediately. But, if you take a coffee/tea break, or have a walk around the block, you might be able to come back and see it working. Just for fun, here are the basic testing tools for DNS:

On Linux systems, you would use the host command (which is provided by the bind-utils package on CentOS):

[joe@alice ~]$ host has address mail is handled by 20 mail is handled by 10

The host command automatically includes MX records, by default. We can ignore that information for the time being. What’s important is that we’re getting back the address I put into the A record for @ earlier.

On a Windows system, you would use the nslookup command, which first reports the default server it queried (your local resolver), then the answer:

C:\> nslookup
Default server: ...

Cool, it looks like everything is working as expected. If you really wanted to keep things simple, you could stop here. This is already providing basic name service for your domain, and it will point users to your address when they browse to your domain name. That was pretty easy, huh? But, most people with serious aspirations to build awesome things on the web will want to have more complete control of their DNS information. Or, if you’re somewhere in the middle, and just want to add a mail server, so you can send and receive mail, the Namecheap interface makes that pretty easy. But, for this article, I’d like to skip that and move right on to setting up our own name servers.

Delegation Is Delectable!

Since you continued reading, I’ll assume you want all the power and responsibility that running your own DNS server provides. So, we have to do something pretty tricky: Tell the world how to find addresses in our domain.

Does that seem simple? If so, that probably means I haven’t explained enough about how DNS works.

In its simplest form, DNS is provided by a hierarchy of name servers on the Internet, with the root name servers at the top of the hierarchy. The root name servers delegate requests for zones (a zone is a collection of records under one domain name) to the authoritative name server for that zone. The root name server information is distributed as a list (technically, modern systems have a hints file, which just tells the DNS resolver client how to find the full list of root name servers and the zones they are authoritative for). The root name servers very rarely change IP addresses, and are known to clients because the lists are distributed with operating systems. You can see what the hint file looks like at the IANA website. You don’t need to understand the contents of this file yet.

A name lookup request begins with the DNS resolver on your computer or device breaking up the name into its components, beginning with the top-level domain. It will then request the authoritative server for the next level of the domain from the name servers for the top-level domain, and each level of the name will be queried until a server that is authoritative for the full name is found. In my case, with the domain, the top-level domain is com, and so a client would ask the servers for the com domain about the name linuxfortheweb within the com zone. The name servers for the com zone would then delegate the request to the name servers that are authoritative for my name, which based on the steps we took above are currently the name servers at Namecheap. How do they know what servers to delegate that request to? That is the job of the domain registrar. They provide the delegation information to the name servers above them in the hierarchy.

At this point, the request path is pretty simple. The client checks the root name server, the request is delegated to the Namecheap servers, the client asks one of the registrar servers (trying a different one if the first doesn’t respond in a timely fashion), and the registrar server replies by sending back the requested record (since we’re trying to keep this simple, we’re talking about an address record mapping the name to the IP address

Now we want to delegate that one step further, and have the client resolver contact our server. For that we need the so-called glue record to point to our name servers. “But, but, but I don’t have any name servers!” I hear you cry. And, to that I say, “Not yet. But, you will.”

Go Forth And Install BIND

Now, you’re gonna get serious about being system administrators by setting up your own name server. You’ll need a server, or, rather, two servers (though you can fake it with just one as long as you have two IP addresses for it). Nearly any Linux or BSD server will do, as BIND is not a very resource intensive service, as long as it is only answering requests for your own zones. You could use a virtual machine or a physical server. As long as it has a reliable always-on connection to the Internet and a fixed IP address, you can use it as a name server. I have three virtual machines specifically for DNS service, and two are configured to automatically be slave servers to the first one. I will show you how to do that in a later article.

Start by installing the BIND package for your OS.

On CentOS/RHEL or other yum-based operating systems:

# yum install bind

Or, on Debian/Ubuntu:

# apt-get install bind9

Or, on FreeBSD (assuming you are using pkgng, the new binary package manager):

# pkg install bind9

Or, on Solaris (assuming you have setup to use OpenCSW):

# pkgutil -i bind9

Now, edit the named.conf using your favorite text editor (I like vim!). This file may be in a variety of places, depending on your OS; on CentOS it is in /etc/named.conf. There will be a lot of stuff in the configuration file that is beyond the scope of this article, but at the bottom add the following (modified to suit your domain and other name server IP addresses, of course):

zone "" {
  type master;
  file "/var/named/";
  also-notify {; };
  allow-transfer {; };
  notify yes;
};

Be careful to adjust the file paths to suit your OS. My example is on a CentOS 7 system. Debian and Ubuntu conventionally keep zone files under /etc/bind (or /var/cache/bind), it’s also possible to run BIND in a chroot, and some operating systems may use different standard locations. It’s nice to do things in the way that is common for your OS so that other administrators will know where to look for things.

This snippet of code instantiates a new zone called

It is a master zone, which means it will be the source from which any other slaves configured later get their information.

The file directive instructs BIND where to find the host records file for this zone.

The also-notify directive indicates that this server will notify another DNS server at the provided IP address of changes to the zone.

The allow-transfer directive indicates that this server should allow the server at the provided IP address to request a transfer of this zone. The combination of this option and the previous are necessary steps to permit another DNS server to act as a slave for this zone.

The notify directive instructs BIND to notify the servers indicated in the also-notify section when changes to the zone have been made.
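For reference, the matching stanza on a slave server (which I’ll cover properly in a later article) would look something like the following sketch; the zone name, file path, and master IP address are placeholders you would adjust for your own setup:

```
zone "example.com" {
  type slave;
  file "/var/named/example.com.hosts";
  masters { 192.0.2.1; };
};
```

The slave fetches the zone from the address listed in masters, which is why the master’s also-notify and allow-transfer options above must point back at the slave.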

Next, edit the zone file (/var/named/example.com.hosts in my example), and insert the following:

$ttl 38400
example.com. IN SOA ns1.example.com. hostmaster.example.com. (
            2014100901
            10800
            3600
            604800
            38400 )

This is the first part of the zone file, sort of a preamble. It includes information that applies to the zone itself. Line-by-line, this snippet contains the following:

$ttl – The default time-to-live for this zone. If not specified in an individual record, this will be the time-to-live for that record. Other name servers may cache records from this zone for this length of time (in seconds, in this case), unless an individual record specifies its own TTL.

SOA – The SOA, or Start Of Authority, resource record is actually a special record type (like the A records we already discussed). Because it starts with example.com., this SOA record applies to the example.com zone. If other subdomains within this zone were to be delegated to other servers, or other zone files on this system, they would have their own SOA. The IN simply stands for Internet and can generally be ignored, as the vast majority of records you deal with (maybe ever) will be IN records.

The next two fields within the SOA record are the primary name server (generally the master name server), and the email address of the administrator of the zone, with the @ symbol converted to a . (period), since @ has special significance in a zone file; so hostmaster@example.com is written hostmaster.example.com.

Then the series of numbers are, in the order in which they appear:

  1. Serial number – A unique identifying number for this version of the zone. It must always increment (get higher) every time the zone is updated. A popular convention is a YYYYMMDDnn format: year, month, day, and the number of modifications made that day.
  2. Time-to-refresh – How often a secondary, or slave server checks its zone database files against a master server.
  3. Time-to-retry – How long a secondary server waits before trying to contact a master server, after a failed transfer.
  4. Time-to-expire – When the secondary server will discard all zone data, if the master cannot be reached.
  5. Minimum-TTL – In BIND 9 the minimum time is used to define how long a negative answer is cached. Old BIND versions used this value differently, but you shouldn’t be using a BIND version that old.

These can generally simply be ignored, as long as they are reasonably sane values and the serial number always increases. But, I wanted you to know what they mean, so they aren’t intimidating. They are all necessary, even if we don’t have to think hard about them.
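If you follow the date-based serial convention described above, a quick way to generate today’s first serial number from the shell (just a convenience, nothing BIND requires) is:

```shell
# Build a YYYYMMDDnn serial: today's date plus a two-digit revision counter.
revision=01
serial="$(date +%Y%m%d)${revision}"
echo "$serial"
```

Bump the revision counter for each additional change you make the same day.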

The next part of the zone hosts file is:

@                        IN A     192.0.2.1
www.example.com.         IN A     192.0.2.1
mail.example.com.        IN A     192.0.2.1
@                        IN MX 5  mail.example.com.
ns1.example.com.         IN A     192.0.2.1
ns2.example.com.         IN A     192.0.2.2
@                        IN NS    ns1.example.com.
@                        IN NS    ns2.example.com.

These are the various mandatory records for this zone. This is pretty much the minimum set of records you’ll need for a full-featured web hosting server and for a domain that you accept email for.

Breaking it down, the A records are, as previously mentioned, Address records. They indicate the IP address of the hostname in question (remember that @ is merely shorthand for the zone name itself, or example.com. in this case).

The MX record indicates that mail for @ (example.com) will be accepted by a host named mail.example.com (which coincidentally shares an IP with the web server, but doesn’t need to). The number 5 is a priority. If we had multiple MX records, the priorities would indicate the order in which they will be attempted. The lower the number, the higher its priority; i.e. an MX record with a priority of 10 would act as a backup to the one with a 5, and would only be contacted if the priority 5 server does not respond.
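To illustrate the priority behavior, a zone with a backup mail server might contain a pair of records like this (mail2.example.com here is a hypothetical secondary mail host, not something defined earlier in this article):

```
@   IN MX 5  mail.example.com.
@   IN MX 10 mail2.example.com.
```

Sending servers try the priority 5 host first, and fall back to the priority 10 host only if the first is unreachable.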

The final type of record in this snippet is the NS, or name server, record. This is used to notify clients of which servers to contact to resolve names within this zone (@), but it could also be used to delegate subdomains into other zones served by other name servers. This information should match the glue records at your registrar, which we will cover in more detail in the next section.

One other useful thing to know about BIND hosts files: A name without a . (period) at the end will have the zone name appended to it. So, mail would become mail.example.com. There is no real reason to prefer one form or the other, but I personally like to use the full name. Whatever you do, do it consistently, so the file is easier to read and easier for future administrators (including yourself!) to make sense of quickly.

There are several other types of record, and several other components that could be included in this file, but I will save them for a later article. The purpose of this article is to get your name server up and running with a minimum configuration for web hosting. And, just the lines in the two snippets I’ve provided above (adjusted appropriately for your domain and your IP address(es)) will do the job.

You can, at this point, save your work, and start up your name server.

On CentOS 7 and new versions of Debian and Ubuntu (whenever they begin shipping systemd), you can do that with:

# systemctl start named

On CentOS 6 and below, you would use:

# service named start

On current and older Debian/Ubuntu versions:

# /etc/init.d/bind9 restart

On FreeBSD:

# /usr/local/etc/rc.d/named start

On Solaris:

# svcadm enable bind
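Once started, you can confirm that named is actually listening on port 53. The exact tool varies by OS; on Linux either of the following should show named bound to port 53 (ss is the newer replacement for netstat):

```
# ss -lnup | grep :53
# netstat -lnup | grep named
```

If nothing is listed, check the system log for named’s startup errors before going further.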

A Little More Testing

Let’s return to our basic testing tools, and see if our new DNS server is actually answering queries for this zone. We’ll pass one extra argument to the host command to force it to query the local server rather than going out and checking with the root name servers and the registrar (since we haven’t told them about our new server yet).

$ host example.com localhost
Using domain server:
Name: localhost
Address: 127.0.0.1#53
Aliases: 

example.com has address 192.0.2.1
example.com mail is handled by 5 mail.example.com.

Excellent! That’s exactly what we want to see. Only one more step to bring our first name server online. (And a few more steps to setup a slave DNS server, but that will be for another day and another post.)

This Is Where The Glue Records Come In

OK, now that you’ve configured a name server, you can tell the world how to use it by configuring the glue records at your registrar. I’ll also finally explain why I created those two extra A records for ns1.example.com and ns2.example.com earlier in this article.

First, what is a glue record? It is a record, stored at your domain registry, that contains the IP addresses of your name servers, and it is what allows the name servers for your domain to live within the zone itself. For example, if I wanted the names within example.com to be hosted on ns1.example.com and ns2.example.com, I would need glue records at Namecheap to tell clients what the IP addresses were for those names. Otherwise, there would be a loop when a client wants to look up a name, that would go something like this:

Client to root name servers: What’s the name server for the example.com zone?

Root Name Server: ns1.example.com

Client to root name servers: That’s a name in the example.com zone I was asking about! Now what?

Glue records allow the registrar to provide the IP information to the client, breaking this loop and allowing the client to know how to contact the name servers for this domain.

Some registrars will allow you to simply create these records by telling them what the IP address for each of your name servers is. But, some will only allow you to enter a name…so, you’re in the tricky situation of needing DNS service before you can set up DNS service! Luckily, Namecheap allows you to enter names and their IP addresses when creating glue records, and then allows you to assign those as the name servers authoritative for your domain. This makes the process pretty painless, if still a little confusing for newbies.

Creating Glue Records At Namecheap


In the Namecheap dashboard, click on the domain name to open the dashboard for that domain, and click the menu item labeled Nameserver Registration in the left menu.

This will open a simple form, where you can associate name server names within your domain with IP addresses.

Editing glue records at Namecheap

Here, you’ll fill in the names and IP addresses of your name servers. You’ll need at least two name servers. You can “fake” it by having two names pointing at the same DNS server, but if you’re resorting to that, you may be better off letting your registrar or a third-party DNS provider handle your DNS service until your needs have grown enough to justify running two DNS servers.

After you’ve saved this, and given it a little time to propagate, your DNS servers should be live. Take a break for a little while and come back for the final section where we’ll go through some tools to test our new DNS server(s).

Test Time!

Check the Glue Records

First, check to be sure the world now knows about your DNS servers. You can use a variety of tools for this, but I usually just use the whois command.

$ whois example.com

This will produce a long list of information, including contact details for your zone, the registrar information, and registration and expiration date. For our purposes, we just want the field labeled Name Server:.


Success! I now see that the glue records are active. You can also use the dig command with an NS query against any of the TLD name servers for your domain to find out name server information. In a later post I’ll talk more about DNS troubleshooting tools and tactics, as well as other DNS-related topics.
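For example, you can ask one of the .com TLD servers directly (a.gtld-servers.net is one of them; any will answer). The response’s ADDITIONAL section should include the glue addresses you registered:

```
$ dig +norecurse NS example.com @a.gtld-servers.net
```

The +norecurse flag asks for exactly what that server knows, rather than letting it chase the answer on your behalf.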

Check the NS Records

The host command used earlier has a number of options for easily testing a variety of aspects of DNS. To check name server records, the -t ns option can be used:

$ host -t ns example.com
example.com name server ns1.example.com.
example.com name server ns2.example.com.

Once again, we have the results we were expecting. Cool!

So, let’s make sure those servers are responding as they should be:

$ host www.example.com ns1.example.com
Using domain server:
Name: ns1.example.com
Address: 192.0.2.1#53
Aliases: 

www.example.com has address 192.0.2.1

Pow! Just what we wanted to see.

Alright, this has been quite a long article, but hopefully you’ll find setting up a new name server a little less intimidating. In the next installment on DNS, I’ll cover setting up slave servers, and touch on some other record types. I’ll also talk about some of the more advanced DNS management features of Webmin, including automatic slave server configuration.

Choosing a Linux Distribution for Your Web Server



Several years ago, I wrote a post about choosing a Linux distribution for a web server. It’s been so long that I don’t even remember where I posted it (so, unfortunately, I can’t link to it), and it’s probably time to revisit the subject, as it comes up pretty frequently in our forums and in conversations with customers. The choice is somewhat more obvious today than it was back then; I recall covering at least five distributions (plus, I believe, Solaris and FreeBSD) in that previous article. Today, the leaders in the server operating system market are pretty clear, at least for Open Source web deployments using platforms such as Node.js, Ruby, Python, PHP, Perl, or Go. Because there are clear market leaders, I’m going to focus my attention on just three Linux distributions: CentOS, Debian, and Ubuntu.

I will briefly explain why there are only three distributions most people should be considering for server deployment, and I’ll also briefly mention some situations where you might want to branch out and consider other options.

So, let’s get on with it, and pick out the right Linux distribution for your new web deployment!

Lifecycle Is Really, Really, Incredibly Important

The average server remains in service for over 36 months, and I have a couple of machines that have been in use for over six years without an OS upgrade! Upgrading the operating system on a production server, even when a remote or in-place upgrade option is available, is prone to breaking existing services in unpredictable ways, or at least in ways that are difficult to predict without a long, time-consuming audit of all the software running on the system, how the pieces interact, and how they will change when upgraded to newer versions.

Thus, one goal when selecting an OS for your server should be to ensure you have plenty of time between mandatory upgrades. Of course, nothing stops you from upgrading earlier than you need to, if you want newer packages and have the time to perform the upgrade, or to migrate to a new server, before the OS reaches its end-of-life date. What we are more concerned about is how soon that decision will be forced on us.

With regard to the lifecycle of the major Linux server distributions, CentOS (and RHEL) is, by far, the king, with a 10 year support period. Ubuntu LTS is second with a 5 year cycle. Debian is somewhat less predictable, but always has at least a 3 year lifecycle; sometimes there is an LTS repository that continues support for a given version beyond that.

Non-LTS Ubuntu releases should not be considered for server usage under any circumstances, as the lifecycle of ~18 months is simply too short. Likewise, Fedora Linux should not be considered for any server deployment.

The end-of-life for current CentOS releases is as follows:

CentOS 5 Mar 31, 2017
CentOS 6 November 30, 2020
CentOS 7 June 30, 2024

For Ubuntu LTS:

Ubuntu 10.04 LTS April, 2015
Ubuntu 12.04 LTS April, 2017
Ubuntu 14.04 LTS April, 2019

For Debian:

Debian 6 (with LTS support) February, 2016
Debian 7 ~Late 2016 (estimated)

Lifecycle Winner
CentOS by a five year landslide. If you don’t know when you’ll have the time and inclination to upgrade your server OS or move to a new server, CentOS may be the best choice for your deployment, if the other deciding factors don’t sway you to something else. Not having to think about server upgrades until 2024 is pretty cool.

Package Management

The reason a long lifecycle for your server operating system is so important is that you need to be able to count on your OS to provide security updates for the useful life of your server. And, the method by which software updates, particularly security updates, are provided is vitally important. It needs to be easy, reliable, and preferably something you can automate without risk.

All of the distributions in this comparison have excellent package management tools and infrastructure. In fact, they are all so excellent that I was tempted to ignore this factor altogether. But, there are some subtle differences, particularly in the available package selection. And, if you’re considering going outside of the Big Three Linux distributions covered here, or are considering a BSD or Windows for your deployment, you should definitely consider how updates will be handled, as the picture is not nearly as pleasant on every distribution and OS, and many cannot be reliably automated.


apt

The package manager invented for Debian and also found on Ubuntu is called apt. It is a very capable, fast, and efficient package manager that handles dependency resolution and downloading and installing packages from both the OS-standard repositories and third-party repositories. It is easy to use, has numerous GUIs for searching and installing packages, and can be automated relatively reliably. apt installs and manages .deb packages. It is reasonably well-documented, though it has some surprising edge cases.
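For anyone unfamiliar with it, the day-to-day apt workflow looks something like this (the package name is just an example):

```
# apt-get update                 # refresh package lists from all repositories
# apt-get install apache2        # install a package and its dependencies
# apt-get upgrade                # apply pending updates without removing packages
```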


yum

yum, the Yellowdog Updater, Modified, was initially developed as the Yellow Dog Updater for the Yellow Dog Linux distribution (a special build of Red Hat/Fedora for Macintosh hardware), and was then forked and enhanced by Seth Vidal. yum installs and manages RPM packages, and is found on CentOS, Fedora, RHEL, and several other RPM-based distributions. There are both command line and GUI utilities for working with yum, and it is well-documented.
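The equivalent everyday yum commands look like this (again, the package name is just an example):

```
# yum check-update               # list available updates
# yum install httpd              # install a package and its dependencies
# yum update                     # apply all pending updates
```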

Which is better?

Choosing between package managers is difficult, as both mostly have the same basic capabilities, and both are reasonably reliable. They both have been in use for many years, and have received significant development attention, so they are quite stable. I believe you could easily find fans of both package managers, and I wouldn’t really want to argue too strongly either way.

I’ve worked extensively with both, and the only time I had a preference was when I was creating my own repositories of packages and when I needed to customize the package manager, and in both cases yum was much more hacker-friendly. Creating yum repositories is as simple as putting some files on a webserver, and running the createrepo command. Creating apt repositories is much more time-consuming, and requires learning a number of disparate tools, and creating scripts to automate management of the repositories.
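As a sketch of how simple the yum side is, creating and publishing a repository is roughly this (the paths are hypothetical; any directory served by your web server will do):

```
# mkdir -p /var/www/html/repo
# cp *.rpm /var/www/html/repo/
# createrepo /var/www/html/repo
```

Clients then just need a small .repo file pointing at that URL, and the repository is live.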

Package Management Winner
yum on CentOS, by a small margin, if you plan to host your own package repositories. If you have no need for your own repos, or are already familiar with apt, either as a user or developer, it is a tie.

Package Selection

Closely related to package management is package selection. In other words, how many packages are readily available for your OS from the system standard repositories, and how new are those packages? Here, there are some interesting differences in philosophy between the various systems, and those differences may help you choose.


CentOS

CentOS package selection is the smallest, by far, of these three distributions, in the standard OS repositories. In the Virtualmin repositories, we have to fill in the gaps by providing a number of packages that we consider core to hosting service. CentOS is missing things like the ClamAV virus scanner for email processing, the ProFTPd FTP server (among the most popular and more feature-filled FTP servers available), and others. This is an annoyance which the other two distros do not make you endure. CentOS has about 6,000 packages in the standard repository.

On the other hand, CentOS has the Fedora EPEL repositories, which provide Fedora packages rebuilt for CentOS. This expands the selection of available packages on CentOS with a couple thousand extra packages. One thing to keep in mind is that EPEL is not subject to the lifecycle promises of the official CentOS repositories, and is subject to volunteer contributions to keep the packages up to date (much like Debian). Most of the popular packages are pretty well-maintained, but I have occasionally seen security updates fall behind in the EPEL repos for some packages for older versions of CentOS, which can be worrying. I generally advise selectively enabling EPEL repositories, by using the includepkgs or exclude options within the repo configuration file. In this way, you’ll know exactly which packages have come from EPEL and which ones need extra caution as time passes to ensure they are kept up to date and secure.
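For example, to limit EPEL to only the packages you have deliberately chosen, you might edit /etc/yum.repos.d/epel.repo along these lines (the mirrorlist URL and package list are illustrative; keep whatever your epel-release package installed):

```
[epel]
name=Extra Packages for Enterprise Linux 7
mirrorlist=https://mirrors.fedoraproject.org/mirrorlist?repo=epel-7&arch=$basearch
enabled=1
includepkgs=clamav* proftpd*
```

With includepkgs set, yum will ignore everything else in EPEL, so a stray dependency can never silently pull in an unmaintained package.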

CentOS packages in the latest release also tend to be older than those found in the latest Ubuntu release. Often this merely depends on who has had a more recent major version release, and for the moment CentOS 7 has some newer packages than the latest Ubuntu 14.04 LTS release. But, the latter also has newer versions of some important packages despite being released earlier.

CentOS is particularly strong (or weak, depending on how you look at it) about keeping the same version of packages throughout the entire lifecycle of the OS release. Thus, CentOS 7 will have Apache version 2.4.6 throughout the entire ten year life of the OS. Security updates will be applied as patches to that version of Apache, rather than adding new versions to the repository. This ensures compatibility throughout the entire lifecycle, and makes it much more predictable that your server will continue to function through security updates. However, it also ensures that in five years you’ll be wishing for newer versions of PHP, Ruby, Perl, Python, MySQL or MariaDB, and Apache. It is a double-edged sword and for some people the cost is too high.

In addition to the EPEL package repositories, there is also the Software Collections (SCL) repository. This repository includes updated versions of popular software, mostly programming languages and databases. There is currently SCL support for CentOS 6, but it is likely to be available for CentOS 7 as the packages found in CentOS 7 become more dated. This can allow you to continue to use an older OS version while still utilizing modern language and database versions. You can read more about the Software Collections in the CentOS Wiki.


Ubuntu

Ubuntu, with all repositories enabled (including universe), has about 23,000 packages. As you can see, there are a lot more packages available for Ubuntu than CentOS. But, many of the less popular packages are considerably less well-maintained. Sticking to the core repositories (main and security) may be advisable, in the same way that avoiding general use of EPEL on CentOS is advised. It’s best to know your packages are being well cared for and that lots of other people are using those packages, so bugs are found quickly.

In our Virtualmin repositories for Ubuntu, we don’t have to maintain any binary packages aside from our own programs, which is indicative of how well-equipped the standard Ubuntu repositories are for web hosting deployments. It is possible to install nearly anything you could want or need, and in a relatively recent version, on the latest Ubuntu release. Ubuntu is also less strict about keeping the same version, and more likely to provide multiple versions, of common packages, like Apache, PHP, and MySQL or MariaDB. This makes Ubuntu a favorite among developers who like to stay on the bleeding edge of web development tools like PHP, Ruby On Rails, Perl Dancer, Python Django, etc.

In short, Ubuntu has far more packages, and generally more recent packages, than CentOS. Ubuntu usually has more recent packages than Debian stable releases, as well, and a better update policy in terms of stability. Ubuntu’s update policy is not as strict or predictable as that of CentOS, but it is unlikely you will run into compatibility problems from the minor version changes that can happen on Ubuntu in some of the core hosting software.


Debian

Debian has the most packages in its standard repositories, with something along the lines of 23,000 packages. The popular packages tend to be well-maintained by a veritable army of volunteers and using excellent infrastructure to assure quality. However, many of the packages will be quite old, at any given time. And there is less assurance of compatibility between updates in Debian than in CentOS, or even the core Ubuntu repositories.

Given Debian’s short lifecycle vs CentOS, and Ubuntu’s ability to tap into the universe repository for access to roughly the same number and quality of packages as Debian, it is hard to argue that Debian leads in this category, even though historically its huge selection of packages was hard to beat. Debian’s stable release also tends to have somewhat older packages, even in the beginning of its lifecycle, which can be a negative for some deployments.

Package Selection Winner
Ubuntu, if sheer number and newness of packages is most important. Or, possibly CentOS, by a small margin, if you prefer stability over newness, and prefer to ensure your software never stops working due to incompatible changes in software running on the system.


Upgrades

I recommend not upgrading servers to entirely new versions of the OS frequently, generally speaking, since it can be time-consuming and can introduce subtle malfunctions that are hard to identify and fix. If you do need to upgrade, a valuable feature is the ability to upgrade without physical access to the system. This can be somewhat nerve-wracking for servers you don’t have easy hands-on access to, but some distributions are better at it than others.

Debian and Ubuntu

apt has long been an accepted method of performing an OS upgrade on Debian, since long before Ubuntu even existed. The apt-get dist-upgrade command will handle not just dependency resolution, but it will also handle packages that have been made obsolete by newer packages or situations where various libraries have moved to new packages. This allows a system to be upgraded to a new version with very little disruption, and because it has been in use for many years, it is generally pretty reliable and a well-supported method of upgrading the system.
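A typical remote Debian release upgrade, sketched with era-appropriate release names (point /etc/apt/sources.list at the new release first; Ubuntu users would normally run the do-release-upgrade tool instead of editing sources by hand):

```
# sed -i 's/wheezy/jessie/g' /etc/apt/sources.list
# apt-get update
# apt-get upgrade
# apt-get dist-upgrade
```

Running upgrade before dist-upgrade keeps the riskier step (the one allowed to add and remove packages) as small as possible.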

The process of upgrading Debian or Ubuntu using apt is quite similar, though in my experience Debian upgrades have historically been smoother than Ubuntu upgrades. This is mostly because of the more conservative nature of Debian development, and the fact that more Debian users run various mixes of newer and older software together (mixing and matching of repositories is more commonly done on Debian to get newer packages, and for development purposes), so community testing of various package versions within each system version is broader, if not deeper. This is a historic difference, based on my own experiences with Debian and Ubuntu upgrades, and may be alleviated by the much larger number of Ubuntu users today.

The important thing here, however, is that upgrades on Debian or Ubuntu are a relatively painless affair, at least when compared to CentOS.


CentOS

Upgrading a CentOS system is more cumbersome. While it is possible to perform an OS upgrade with yum, it is not currently recommended or supported by the CentOS developers, so remote upgrades are very challenging. In fact, there isn’t even a very clear path for upgrading from CentOS 6 to CentOS 7 while sitting at the console. There are new tools in development for handling OS upgrades using yum (fedup on Fedora and redhat-upgrade-tool on RHEL/CentOS) which will likely eventually provide a reasonable upgrade process. However, I have never seen an upgrade using this process work without significant manual correction of issues after the upgrade completes. I would not trust this method to upgrade a remote system, unless I had KVM access and remote hands available in the data center to handle inserting a rescue CD, should it come to that.

In short, CentOS should be considered a “cannot upgrade” OS for servers in remote locations. The only tools for performing remote upgrades are very early alpha quality at best and are not recommended by their developers for production systems.

Upgrade Winner
Debian, because of its long history of users upgrading via apt and its ideology of mixing and matching packages from various repositories, relying on the dependency metadata of the packages to allow them to reliably interoperate. Ubuntu provides a reasonable upgrade path using the same mechanism, and is a very close second. CentOS isn’t even in the game, and cannot be upgraded remotely via any reasonable mechanism.


Popularity

Ordinarily, I don’t recommend looking to popularity as a major deciding factor in choosing software, though, for a variety of reasons, it does make sense to choose tools that are used by a reasonably large community. This is especially true for Open Source software.

Popular software will have more people using it, more people asking and answering questions about it online, and more people who are experts or at least comfortable working with it. This ensures you can get help when you need it, you’ll be able to find plentiful documentation, and you’ll be able to hire people with expertise if you get stuck in a situation that’s over your head.

On this front, things have shifted quite a bit in the past several years. CentOS once ruled the web server market, with a huge market share advantage. Among our many thousands of Virtualmin installations, CentOS accounted for approximately 85%. Today, CentOS is still the most popular web server OS, with about 50% market share (depending on who you ask and which specific niche you’re talking about, this may vary quite a bit), with Ubuntu following closely behind with 30% (and in some niches it may even hold a larger share than CentOS), and Debian following behind with about 15%.

For the majority of users, any of these three systems has achieved the minimum level of popularity necessary to ensure you have a large and vibrant community of developers, users, authors, and freelancers available to make the system work well in a wide variety of use-cases. I would not hesitate to recommend any of these systems, but would caution against going outside of these three, because the user base of everything else is so very small.

Popularity Winner
CentOS, but it probably doesn’t matter all that much. With a 50% market share, you’re most likely to find the help you need when problems or questions arise. But, Ubuntu and Debian also have very large and active communities, and you’re likely to find all the help and documentation you need for any of them.

Your Experience Level

This one won’t have a winner that I can choose for you, and simply has to be decided based on your own experience level. And, it may even be the most important single factor. If you are an expert on one distribution, but a novice on the others, you would almost certainly want to choose the one you know over the ones you don’t (unless others on your team have different expertise).

If you use Ubuntu on your desktop or laptop machine, you may find that using an Ubuntu LTS release on the server provides the least friction; you can develop in roughly the same environment you’ll be deploying into. Likewise, if you are a Fedora user on the desktop, CentOS is an obvious choice, because they share the same philosophy, package manager, and many of the same packages (Fedora can be seen as the rapidly moving development version of CentOS, and most packages and policies that find their way into CentOS began by being introduced into Fedora a year or more before).

Of course, if you have no strong existing preference, it would be wise to consider your needs for your systems and compare the other factors in this article.

Experience Winner
You! You get to choose from some of the most amazing feats of software engineering ever to exist, representing millions of person-hours of development, and they’re all free and Open Source. We live in amazing times, don’t we?

Some Final Thoughts

If you’ve made it this far, congratulations! You now know I like all three of the most popular web server Linux distributions quite a bit, and think you will probably be pretty happy with any of them. You also know that CentOS is possibly the “safest” choice for new users, by virtue of being so popular on servers, but that Ubuntu is also a fine choice, especially if you use Ubuntu on the desktop.

But, let’s talk about the other distributions out there for a moment. There are some excellent, but less popular distributions, some of them even have a reasonable life cycle and a good package manager with good package selection and upgrade process. I won’t start naming them here, as the list could grow quite long. I do think that if you have a Linux distribution that you are extremely fond of, and more importantly, extremely familiar with, and the rest of your team shares that enthusiasm and experience, you may be best off choosing what you know, as long as you do the research and make sure the lifecycle is reasonable (three years is a little short, but most folks would be OK with a 5 year lifecycle, especially if upgrading is reasonably painless).

There are also a variety of special purpose distributions out there that may play a role in your deployment, if your server’s purpose matches that of the distribution. Some good examples of this include CoreOS or Boot2docker, which are very small distributions designed just for launching Docker containers, and those containers would include a more standard Linux distribution. Those are outside of the scope of this particular article, but I’ll talk more about them in a future post.

And, if you’ll be installing the Virtualmin control panel on the system (and I think you should, because it’s the most powerful Open Source control panel and also has a well-supported commercial version), you’ll want to make sure it’s one of our Grade A Supported operating systems.

Virtualmin Memory Usage (and Other Tales of Wonder and Woe!)

I’ve noticed over the years that one of the most common sources of confusion for new Virtualmin users, or users who are new to Linux and web hosting in general, is memory usage. I’ve written up documentation about Virtualmin on Low Memory Systems in the past, but it focuses mostly on helping folks with low memory systems reduce memory usage of their Virtualmin (and all of its related packages, like Apache, PHP, MySQL, and Postfix) installation. It goes into interesting detail about Webmin memory usage, library caching in Virtualmin, etc. but doesn’t go into things like the memory usage of various services in a Virtualmin (or any LAMP stack) system. This article will briefly address each of these subjects and provide real world numbers for how much memory one should expect a Virtualmin installation to require.

A side story in all of this is how Virtualmin compares to other web hosting control panels. Somehow, this is considered interesting data for some folks, though I can’t really fathom why, given the huge differences in functionality available, particularly when comparing control panels with extremely limited web-focused functionality to full-featured control panels (like Virtualmin, cPanel, or Plesk) that provide mail processing with anti-virus and spam filtering, database management, etc. But, it comes up a lot. So, let’s get some hard numbers for Virtualmin and talk about where those numbers come from. If anyone happens to have data about memory usage of other control panels, feel free to post them in the comments (though, I doubt any control panel will use vastly more or less memory than Virtualmin, unless it’s written in Java, or something similar).

Where does the memory go‽

The first thing I want to do is break down memory usage in a production Virtualmin system, and talk about which components require large amounts of memory, and which ones can be reduced through options or disabling features.

Virtualmin system top

top sorted by memory usage on a very busy 8GB server

The above image is the output of the top command on a Virtualmin system that hosts several active websites, including a large Drupal deployment (one with ~30,000 registered users, ~100,000 posts and comments, and about 100,000 visitors a month, at the time of writing) and all of our software download repositories. As you can see, the system has 8GB of RAM and 2GB of swap. Here’s what is using the majority of memory on this system, in order of size:

  • mysqld – This is the MySQL server process. It is configured with quite large buffers and cache settings, in order to maximize performance for our Drupal instance and the other applications that access the database, such as the Virtualmin Technical Support module (which can create tickets in our issue tracker). This is the largest single process on the system, which is likely to be true on most systems hosting large database-backed websites. It has 2.3GB of memory allocated, though only 418MB of that is resident; the rest is not necessarily dedicated to this process or in physical RAM. See the note below about virtual vs. resident size.
  • clamd – This one always surprises people, and folks often forget about it when calculating their expected memory usage. ClamAV is very demanding, because it loads a large database of virus signatures into memory. Virtualmin allows it to be configured as either a daemon or a standalone executable…but the standalone version is extremely slow to start, and causes a spike of both CPU and disk activity each time it starts. So, if you plan to process mail (on any system, regardless of whether Virtualmin is involved), you should expect to give up a few hundred megabytes to the spam and AV filtering stack. The ClamAV server here has a 305MB resident size.
  • php-cgi – There are several of these; they are the pool of mod_fcgid-spawned PHP processes serving the Drupal website. They are owned by the user “virtualmin”, because we use suexec on this system and the username of the account that owns the site in question is virtualmin. The PHP processes are quite large here, larger than most, primarily because we make use of a large number of Drupal modules, and some of those modules are quite demanding, so we’ve had to increase the PHP memory limit for this user. These processes have ~135MB resident size and a much larger virtual size, but most of the virtual memory usage is shared across every php-cgi process for every user.
  •  – This is part of the mail processing stack: a server provided by Virtualmin that allows SpamAssassin and ClamAV to have user-level and domain-level configuration settings, and lets end users safely modify some types of configuration for these services. This process is 55MB, with another ~40MB shared with other processes.
  • spamd – The SpamAssassin server. See, I told you mail processing was heavy! At ~50MB for each SpamAssassin child process, this adds up on a heavily loaded system.
  • perl – Finally, this is actually the Webmin/Virtualmin process! My system currently has library caching fully enabled, so the total virtual process size is ~135MB (this would be smaller on a 32 bit system), with a resident size of 46MB. If I were on a low-memory system, I would disable pre-caching, and Virtualmin would shrink to about 15MB (less on a 32 bit system). This can be set in Virtualmin->System Settings->Preload Virtualmin libraries at startup? The options are “All Libraries”, “Only Core”, and “No”, which will result in a Webmin process of roughly 40-45MB, 20-25MB, or 12-17MB resident, respectively, with the lower end of each range applying to 32 bit systems.
  • named – This is the BIND DNS server. Its memory usage is quite modest compared to a lot of the other services on this system, and is probably never something one would worry about tuning, unless you serve a very high volume of DNS requests. One thing to bear in mind, however, is that if you have enabled the caching nameserver features of BIND, and many users are using it for DNS service, the process size can grow quite large. We recommend only enabling recursive lookups for the Virtualmin server itself (or, possibly even better, forwarding those recursive lookups to another server).
  • httpd – This is the pool of Apache web server processes. Notice that the virtual size is quite large, while the resident size is quite small; much of the memory usage of these processes is shared across all of them (of which there are probably 100+ on my system at any given time, due to the number of concurrent users). The size of these processes is determined mostly by the number of modules you have installed. But even on this system, with a number of modules enabled and actively used, the resident size is only 9MB per process. Given my 3.4GB of currently free memory, Apache could spawn over 300 additional processes (beyond the 100 or more already running) without bumping into the memory limitations of this system. Apache often gets accused of being a memory hog compared to other web servers, but that’s often an unfair comparison between an Apache loaded with a bunch of large modules (like mod_php or mod_perl, neither of which is needed for most Virtualmin systems) and a stripped-down lightweight server, like nginx, which simply doesn’t have any large modules to enable.
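The Apache headroom estimate above is simple arithmetic: free memory divided by per-process resident size. Here is a quick sketch using the illustrative figures from this system; on a live server you would substitute real values (the commented commands show one way to gather them, assuming the GNU procps tools):

```shell
# Estimate how many more Apache workers would fit in free RAM.
# 3400 MB free and ~9 MB resident per httpd process are the figures
# quoted above; on a real system, substitute values from e.g.:
#   free -m                                                       # free memory
#   ps -C httpd -o rss= | awk '{s+=$1; n++} END {print s/n/1024}' # avg RES in MB
awk 'BEGIN { free_mb = 3400; rss_mb = 9;
             printf "room for ~%d more httpd processes\n", free_mb / rss_mb }'
```

This is only a rough ceiling, since it ignores memory other processes will want, but it is a useful sanity check before raising Apache’s MaxClients/MaxRequestWorkers.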

Note: VIRT and RES are indicators of the type of memory that has been allocated. VIRT includes the resident memory, as well as memory-mapped files on disk, shared libraries that share RAM with other processes, etc., while RES is the resident memory usage, which roughly reflects how much RAM is dedicated to the process.
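You can see the same distinction for any process by reading its status file under /proc: VmSize corresponds roughly to top’s VIRT column and VmRSS to RES. For example, for the current shell:

```shell
# VmSize ~ VIRT (total address space, including mapped files and shared
# libraries); VmRSS ~ RES (pages actually resident in physical RAM).
grep -E '^Vm(Size|RSS)' /proc/self/status
```

Substitute a process ID for "self" (e.g. /proc/1234/status) to inspect any other process.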

There are many other processes on this system, including the rest of the httpd processes, but these few processes already explain where the vast majority of memory on the system is going, and so we won’t dig any deeper into it for this story.

Just for fun, let’s see a somewhat smaller system’s memory usage:


Memory-sorted top output on a moderately loaded 4GB Virtualmin server

This is a ~4GB virtual machine, and I’ve temporarily disabled library pre-caching in Virtualmin, which makes the process size about 17MB (it is a 64 bit system). Since it’s so small, it doesn’t even show up in the list when sorted by memory usage. In this case, the large processes start with MySQL, once again configured with somewhat large buffers and caches. Java shows up here, which is uncommon for me, since Java is such a beast to work with, but I have a Jenkins CI instance running on this box. Then comes the mail filtering stack, slightly smaller than on the system above; I don’t have ClamAV running on this box, since the only email it processes is received by people running Linux, and we don’t worry so much about viruses in email. And then comes php-cgi, which is much smaller on this system, since it only runs moderately small WordPress instances and a pretty hard-working MediaWiki installation.
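If you want to reproduce a memory-sorted listing like these screenshots without an interactive top session, ps can produce one directly (the --sort option assumes the GNU procps ps found on most Linux distributions):

```shell
# Ten largest processes by resident memory (the RSS column, in kB),
# largest first; the first line is the column header.
ps aux --sort=-rss | head -n 11
```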

It’s also possible to run Virtualmin in a very small amount of memory, particularly if you don’t need to process mail on the system. We recommend at least 256MB for a 32 bit system, and 384MB for a 64 bit system, even if you won’t be running a mail stack. While Virtualmin itself doesn’t need more memory than that, the performance of most web applications would be pretty abysmal on anything less; MySQL performance, in particular, is directly correlated with the amount of memory you can devote to it. Using nginx (which is also supported by Virtualmin) may help reduce memory usage, though a minimal Apache configuration won’t be much larger.


Virtualmin uses somewhere between 12MB and 46MB resident memory, and up to ~150MB virtual, depending on whether library caching is enabled and whether it is a 32 or 64 bit system.

If you’re processing mail with spam and antivirus, Virtualmin will, by default, also run a 45-55MB daemon for assisting with processing mail.

All of this is dwarfed by the actual services being managed by Virtualmin, like Apache, MySQL, ClamAV, SpamAssassin, Postfix, etc.

If you need to run Virtualmin on a very small-memory system, the best thing you can do is off-load email to some other system or service, since the full mail processing stack with SpamAssassin, ClamAV, Postfix, and Dovecot can easily add up to a few hundred MB.
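One common way to off-load mail is to keep a minimal Postfix on the box purely for outbound relay and point it at an external smarthost, letting that host do the spam and virus filtering. A sketch of the relevant main.cf fragment is below; the relay hostname and port are placeholders, and whether you need the SASL lines at all depends on your relay provider:

```
# /etc/postfix/main.cf (fragment) -- relay all outbound mail to a smarthost.
# "" is a placeholder; the [brackets] suppress MX lookups.
relayhost = []:587
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_security_options = noanonymous
smtp_tls_security_level = encrypt
```

Inbound filtering can likewise be handled by pointing the domain’s MX records at a hosted filtering service, so SpamAssassin and ClamAV never need to run locally at all.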

Interesting Links

My favorite site to refer people to when they’re wondering about what memory usage information means on a Linux system is Linux Ate My RAM!
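The short version of that article: Linux deliberately uses otherwise-idle RAM for disk cache and hands it back the moment a process needs it, so a “nearly full” memory readout is usually a sign of a healthy system, not a problem. The kernel’s own accounting makes this visible (MemAvailable, which counts reclaimable cache, appears in kernels 3.14 and later):

```shell
# MemFree often looks alarmingly small on a healthy server; MemAvailable
# (which includes reclaimable cache) is the number that actually matters.
grep -E '^(MemTotal|MemFree|MemAvailable|Cached)' /proc/meminfo
```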