
Is Ubuntu a good server OS? – firewall edition

Earlier, I posted about upstart and how it illustrates that Ubuntu is not a good operating system for running servers. Today I wanted to provide another example: UFW, the “Uncomplicated FireWall” that is installed by default on Ubuntu servers.

Linux firewalling and UFW

Firewalls in Linux are implemented with iptables. In a nutshell, iptables evaluates packets against lists of rules to determine whether they should be accepted or rejected. iptables is typically used on servers through scripts that set up all these rules the way you want them – these days that process is generally managed by whatever configuration management system you’re using (e.g. Chef or Puppet). UFW is a tool distributed with Ubuntu that provides a different, command-line-driven way of setting up these iptables rules.
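Such a script is typically just a sequence of iptables commands. Here’s a minimal sketch (the specific ports and policy are illustrative, not a recommendation):

# illustrative rules: keep established traffic, allow loopback and SSH, drop the rest
iptables -F INPUT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j ACCEPT
iptables -A INPUT -j DROP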

How UFW works

UFW keeps its own set of iptables rules in /etc/ufw. When ufw is turned on, it flushes out all the existing iptables rules and replaces them with the rules from /etc/ufw. You can then use the command line tools to add ports to allow or deny, which will update both the running iptables rules as well as the copies stored in /etc/ufw.

Note that all UFW works with is what’s in /etc/ufw – it doesn’t know or care what the running iptables rules are.
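For instance (port 22 is just an example), adding a rule updates both places – on the versions I’ve used, user-added rules land in /etc/ufw/user.rules:

ufw allow 22/tcp                      # updates the live iptables rules...
grep 'dport 22' /etc/ufw/user.rules   # ...and the persistent copy under /etc/ufw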

A Totally Theoretical Example

Let’s pretend you have a big distributed system where the nodes are all running Ubuntu. It’s a big system, and it’s part of an even larger IT environment, so there’s a large IT support organization staffed with people who were hired because they have a fair bit of experience with Linux systems – but some of those folks have more at-home desktop Linux experience than datacenter server Linux experience. These folks also don’t know the ins and outs of the design of this particular distributed system, because they are responsible for all of the varied IT environments at this large organization. The hero in our story is one of these selfless and courageous sysadmins. A report comes in from the users of this distributed system that they’re having trouble reaching some of the resources it provides. Initial troubleshooting leads our hero to posit that the problem is a misconfigured firewall. His experience is with Ubuntu and he’s always used ufw, so his first step is to disable the firewall and see if that helps:

ufw disable

There’s no change, so he figures the firewall is unrelated and turns it back on:

ufw enable

Now he moves on to further troubleshooting.

Here’s the problem: The ‘ufw disable’ command above didn’t actually do anything, as ufw wasn’t enabled to begin with. No harm, no foul. However, running ‘ufw enable’ turns on ufw, and configures it in default mode – which denies all incoming connections. As the server in question provides many key network services for this distributed system, suddenly all of the other nodes in this system cannot reach these central services, and the entire distributed system starts to fall over!
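Had our hero checked the firewall state first, the asymmetry would have been visible before anything broke. Roughly (output paraphrased from memory – check a real box):

ufw status verbose   # before: reports “Status: inactive” – so ‘ufw disable’ was a no-op
ufw status verbose   # after ‘ufw enable’: “Default: deny (incoming), allow (outgoing)”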

So what went wrong here?

UFW is not a front-end for iptables

It may seem like UFW is a front-end for iptables – you run ufw commands and it makes changes to iptables rules. But what it’s really doing is throwing away whatever is currently in iptables and replacing it with what’s been configured in ufw. A real front-end would let you inspect and modify the iptables rules that already exist.

How this plays out in our situation: lots of carefully-crafted iptables rules for access, masquerading, etc. got silently nuked when the ‘ufw enable’ command was run. UFW does not take the starting state of the system into account before it does stuff. That’s bad on a server.
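You can demonstrate the destruction for yourself on a scratch box (don’t try this on anything you care about):

iptables-save > /tmp/before.rules         # snapshot the hand-crafted rules
ufw enable
iptables-save > /tmp/after.rules
diff /tmp/before.rules /tmp/after.rules   # the old rules are gone, replaced by ufw’s chains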

UFW’s defaults are appropriate for desktops, not servers

I love ‘deny all’ as a default for firewalls. I could even concede that ‘deny all’ is the best default for servers, although I think that could be debated. However, let’s look at some of the specifics that ufw’s defaults do allow:

  • ICMP (this is OK)
  • multicast DNS (hmmm….)
  • UPnP (!!)

OK, this should go without saying, but does it seem to you like your server operating system should enable UPnP discovery by default? Makes sense for a desktop (maybe) – but not for a server. No way, no how.
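If you’re curious where these defaults come from, they’re baked into the rules files UFW ships. On the Ubuntu releases I’ve looked at, the multicast allowances sit in /etc/ufw/before.rules and look roughly like this (paraphrased – check your own copy):

# allow MULTICAST mDNS for service discovery
-A ufw-before-input -p udp -d 224.0.0.251 --dport 5353 -j ACCEPT
# allow MULTICAST UPnP for service discovery
-A ufw-before-input -p udp -d 239.255.255.250 --dport 1900 -j ACCEPT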

UFW is an Ubuntu-specific thing (and makes iptables complicated)

If you have a ‘firewall administrator’ on staff and you ask them to look at a server, how likely is it that they’ll know anything about UFW? IMO, very unlikely. They’re probably familiar with iptables, so the first thing they’ll do is look at iptables directly – and they’re going to see dozens of rules that point at other rules, and they’re going to be very confused. At best, they’ll refuse to touch it and give you a big lecture about how if packets get to your server’s TCP/IP stack before hitting your firewall, you’re doing it wrong. At worst, they’ll start making changes, which, due to the complexity of these rules, will likely create more problems, not fewer.
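To make that concrete, here’s roughly what greets them (chain names from a ufw-managed box I’ve seen; the exact set varies by version):

iptables -L -n | grep '^Chain'
# Chain INPUT (policy DROP)
# Chain ufw-after-input (1 references)
# Chain ufw-before-input (1 references)
# Chain ufw-user-input (1 references)
# ...plus a dozen more ufw-* chains referencing one another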

Make your Ubuntu servers better

If you are going to run servers on Ubuntu, here’s one simple suggestion for how to make them better:

apt-get remove ufw

After this, at least your beleaguered sysadmins will have one less possible way to screw up your systems while doing their jobs.


Is Ubuntu a good server OS?

My “Openstack IRL” presentation informs the audience that we at Cloudscaling use Ubuntu for the systems we deploy. When I present this live and we get to that slide, I usually say something like this:

We use Ubuntu for our systems. This is somewhat ironic because at least once a month in our office we have a big discussion about how terrible Ubuntu is as a server operating system…

Funny. But is it true? Is Ubuntu a terrible operating system for servers? Let’s look at one data point: Upstart.

Upstart’s raisons d’être

My distillation of the reasons upstart was created to replace the traditional rcX.d init structure from SysV and BSD is:

  1. the traditional system is serial rather than parallel, meaning reboots take longer than they have to – and people reboot their systems a lot more these days
  2. the traditional system doesn’t deal well with new hardware being plugged in and out on a regular basis – and people are constantly plugging stuff in and out of their systems these days

Do those sound like conditions that affect your servers? Me neither. They are desktop-centric concerns. And there’s nothing wrong with that – unless you’re trying to run a server.

Why does it matter?

From the perspective of a crotchety old-time unix sysadmin (a hypothetical one of course!), upstart is a PITA. Let me try to illustrate why:

Checking what order stuff starts in

In the traditional world, here’s what you have to do to find out what order things start in:

ls /etc/rc2.d

That’s it. The output of that command provides you with everything you need to see at a glance that (for instance) apache is going to start after syslog.
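For instance, a listing might look something like this (these link names are hypothetical, but the S##name pattern is real):

ls /etc/rc2.d
# K20nfs-common  S10syslog  S20ssh  S91apache2  S99rc.local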

Here’s how you do it in the upstart world:

Well, I wish I could give you a simple way to do that, but you can’t. You have to open up the conf file for the service you’re interested in in /etc/init and look at what events it depends on for starting. If one of those events is the startup of another service, then you know your service will start after it. But if there is no dependency listed on another service, then you don’t know what order they will start up in. Yours might start first, the other one might start first, or they may both be starting up at the same time. You don’t know, and it isn’t guaranteed that they will start in the same order every time the system boots. This makes crotchety old unix sysadmins nervous, and leads to the second point….
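The closest thing to a quick overview I know of is grepping the start conditions out of every job, which at least puts all the dependencies side by side – though it still gives you events, not a guaranteed order:

grep 'start on' /etc/init/*.conf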

Defining what order stuff starts in

In the traditional world, this is done with 2 digit numbers. You have a 2 digit number (part of the name of the file in /etc/rcX.d) and the scripts are run in the order of the numbers in their filename. So if you want one script to start later than another, just change its number to be larger than that other one. Easy to understand, and all you have to know to do it is how to use mv. And there are no hard dependencies here – if you build one server that doesn’t contain a particular service, that init file won’t be installed, and none of the other init files will be affected and startup will go as you expect.
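For instance (these link names are made up), a reordering is a single rename:

# make apache start after syslog by giving it the larger sequence number
mv /etc/rc2.d/S05apache2 /etc/rc2.d/S95apache2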

In the upstart world, you do this by specifying startup dependencies between the jobs that start services. Each job emits an event when it completes, and you can reference that event in the conf files for other services. So say you have two services, s1 and s2, and you want to be sure s2 starts after s1. You do this by putting a line like this into /etc/init/s2.conf:

start on started s1

So aside from the crotchety old sysadmin spending 45 minutes perusing ‘man upstart’ to figure this out in the first place, the problem you run into here is with distributed systems that can be deployed in varied configurations. For example, sometimes s1 and s2 are on the same node, and sometimes they are on different nodes. If you put the above line into /etc/init/s2.conf by default, guess what happens if you deploy in a config where s1 isn’t on the same node? s2 will never ever ever start.
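There are partial workarounds – upstart’s start on expressions support or, so you could fall back to a runlevel event for the case where s1 isn’t installed – but note that this just trades ‘never starts’ for ‘might start before s1 does’. A sketch, using the hypothetical s1/s2 from above:

# /etc/init/s2.conf – fallback sketch, not a real fix
start on started s1 or runlevel [2345]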

Summary

My take on this is that upstart is a great thing for desktop systems. For server systems, it’s adding a bunch of complexity and brittleness without providing any actual benefits. And it’s one check mark in the “ubuntu isn’t a good OS for servers” category.


Getting Ubuntu to send through Gmail for Nagios

Recently I set up a Nagios installation running in a VM on my laptop (VirtualBox on a MacBook Pro). This was a quick and easy way for me to get some monitoring going without needing a real server somewhere. (I know, I know – “Use the cloud, dumbass!” It’s complicated.)

Anyway, the trickiest part was getting the notifications to actually go to my gmail.com account. Gmail doesn’t accept regular old SMTP connections.  You need to use authentication over TLS (or SSL).  I didn’t know how to set this up, but I found this great page that explains most of it:

Mark Sanborn’s guide to getting postfix to send through Gmail

This guide was written several years ago, and while it almost worked for Ubuntu 10.10, two changes were required.

The first change was to use the new Equifax CA cert because the one that Gmail is currently using isn’t part of the postfix config in 10.10, although the right cert is already on the system. Fixing this is very straightforward:

cat /etc/ssl/certs/Equifax_Secure_CA.pem >> /etc/postfix/cacert.pem

The second change is to tell postfix to actually use the transport file that the guide has you set up. This setting must have been the default back in 2007, but it wasn’t there for me. It was a one-line change to /etc/postfix/main.cf:


root@gunther-VirtualBox:/etc/postfix# rcsdiff main.cf
===================================================================
RCS file: RCS/main.cf,v
retrieving revision 1.1
diff -r1.1 main.cf
19a20,22
> # use a transport file to send stuff to google for gmail.com
> transport_maps = hash:/etc/postfix/transport
>
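For reference, the transport file itself (which the guide has you create) routes gmail.com mail to Google’s SMTP server. Mine looks roughly like this – consult the guide for the exact host and port:

gmail.com    smtp:[smtp.gmail.com]:587

Since main.cf references it as a hash: map, remember to run postmap /etc/postfix/transport after any edit so postfix rebuilds transport.db.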

(Yeah, I still use RCS when I’m modifying config files. Try it!
apt-get install rcs ; mkdir RCS ; ci -l file
You’ll love it).

By following the instructions at the link and adding these two steps, my VM was sending email to my gmail.com account and everybody was happy.