I like cars. And I love finding analogies for my day job in the car world. Cars have been around for a long time and they’ve gone through a bunch of technological advances. I happen to have two cars from two different eras that demonstrate many of the changes in automotive technology over the last 40 years. I believe that these changes can teach us something about the evolution of Web Ops technologies as well.
One of my cars is a recent Mustang, a 2006. It’s a modern car, full of technology and creature comforts. I also have a 1966 Mustang, which is devoid of both. I just recently started driving the 1966 again (it was non-functional for many years), and I have been really struck by how different the two driving experiences are.
What first got me thinking was noticing the vast difference between the two manual transmissions, and specifically the clutches. These two cars are very similar at first glance: both rear-wheel drive, both powerful V8 engines, both manual transmissions. But they don’t drive the same at all.

In the ’66, all of the driver controls (steering wheel, clutch pedal, gas pedal, etc.) are mechanically linked to the things they control – press the gas pedal to a certain point, and that corresponds exactly to how far the throttle plates in the carburetor open. In the ’06, everything is computer controlled. The position of the gas pedal is sent to the computer, which uses it as one of the inputs in deciding how much fuel to inject into the mixture going into the cylinders. That means the computer in the ’06 can adjust things if it wants to – and it often wants to.

You can see this in how forgiving the ’06 is when starting from a stop. As everyone who has driven a manual car knows, from a stop you give the car some throttle to bring up engine speed while simultaneously releasing the clutch pedal. That brings the clutch plate into contact with the flywheel at the back of the engine, which starts the clutch plate spinning, which spins the transmission gears, then the driveshaft, then the differential gears, and finally the rear wheels, making the car go. In the ’66 it’s very easy to get a result outside expected norms – stalling the car if you give too little throttle, or spinning the rear wheels if you give too much. It is much, much harder to stall the ’06, because the computer knows that’s probably not what you’re trying to do – so it will put more fuel into the mixture than the throttle position would normally indicate, keeping the engine from stalling (up to a limit). This makes the general driving experience in the ’06 more pleasant, more predictable, and safer.
Great, right? That’s progress! Cars should be pleasant, predictable, and safe, shouldn’t they? Yes they should – usually. But not always. What if you’re on a racetrack, trying to get the maximum possible performance out of your car? In that case, the same technology that makes your car more pleasant in average driving conditions will hold you back.

You can see this much more concretely in traction control systems. Briefly, many modern cars (especially powerful ones) have computerized traction control systems that can sense when a wheel starts to spin (meaning you’re losing traction) and respond by either using the braking system to slow that wheel’s rotation or lowering the power output from the engine. In average driving conditions, this is great – it makes the car safer and more predictable. However, if you’ve ever watched Top Gear or any show where professional race drivers try to get the most performance out of a road car, the very first thing they do is disable the traction control systems – because at the edges of the performance envelope those systems hold you back and keep you from getting the car’s full potential. To go FAST, you need to be able to drive the car with one or more tires on the ragged edge of traction. Racing happens on the edge of out of control – and (so far) traction control systems are nowhere near as good at balancing on that edge as a human driver.
What does this mean for Web Ops?
- When you make things easier, more predictable, and safer, you by necessity impose constraints on what can be done with the system. For most people most of the time, these are very good constraints. However, if you are trying to push the edges of the envelope when it comes to performance or efficiency (like a race driver does), those constraints will get in your way and you’ll want to avoid or remove them. For example, using standardized machine configurations that are automatically applied can greatly increase the robustness of the system, both by removing manual mistakes and by simplifying diagnostic and repair procedures. However, those standardized configurations are probably not going to be as performant as a custom-built, custom-tuned configuration would be for any particular application. If you’re in the 99% who don’t need maximum performance for your particular business, standardized configurations are going to be great for you. But if you’re in that 1%, you may need the flexibility to customize.
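One common way to get both halves of that trade-off is a layered configuration: a fleet-wide standard that everyone inherits, plus an explicit override mechanism for the few services that need custom tuning. Here’s a minimal sketch in Python – the parameter names and values are hypothetical, purely for illustration:

```python
# Fleet-wide standard configuration. Every machine starts from this.
# (All parameter names and values here are made up for illustration.)
BASE_CONFIG = {
    "tcp_keepalive_secs": 60,
    "worker_processes": 8,
    "log_level": "info",
}

def build_config(overrides=None):
    """Start from the fleet-wide standard, then apply per-application
    tuning only where someone has explicitly asked for it."""
    config = dict(BASE_CONFIG)      # the 99% get the standard as-is
    config.update(overrides or {})  # the 1% opt in to custom tuning
    return config

# Most services take the standard unchanged:
web_config = build_config()

# A performance-critical service tunes a few knobs and inherits the rest:
db_config = build_config({"worker_processes": 32, "log_level": "warn"})
```

The point of the sketch is that the overrides are visible and deliberate – you keep the robustness of a standard default while leaving an escape hatch for the 1%.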
- Another effect of using technology to make things easier, more predictable, and safer is that it usually imposes new costs on the system in terms of resources. Take virtualization as an example. Virtualization provides powerful new capabilities for IT – you can easily reconfigure your infrastructure, snapshot your systems, etc. However, there are costs to virtualization – the virtualization layer consumes system resources that can’t be used directly for your workloads. For most people most of the time, virtualization is overall a large net benefit. However, if you’re running a huge infrastructure where losing 0.5% of your resources represents a significant financial burden, then virtualization probably isn’t for you.
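The 0.5% figure is worth making concrete. A back-of-the-envelope calculation shows how a small per-machine overhead turns into real money at scale – the fleet size and per-server cost below are invented numbers, only the 0.5% comes from the text:

```python
# How much does a 0.5% virtualization overhead cost at scale?
# Fleet size and per-server cost are hypothetical, for illustration.

overhead = 0.005            # fraction of capacity lost to the hypervisor
servers = 20_000            # hypothetical fleet size
cost_per_server = 5_000     # hypothetical annual cost per server (USD)

# Capacity lost, expressed as "servers' worth" of compute:
lost_servers = servers * overhead          # 100.0 servers' worth
annual_cost = lost_servers * cost_per_server  # 500,000 USD per year
```

At twenty machines, 0.5% is noise; at twenty thousand, it’s a hundred servers you’re paying for but can’t use.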
- The costs imposed by new technology to make things easier, more predictable, and safer aren’t always measured in resources. They can come in the form of risks as well. Computer-controlled throttle and braking systems could in theory suffer from software bugs that create dangerous conditions in ways mechanical systems are not vulnerable to, like holding the throttle open even when the driver is trying to slow the vehicle down. New capabilities and new software introduce new failure modes. IT is littered with similar examples – in my own career I was struck by how often the introduction of high-availability systems (like database clustering, for instance) actually led to failures of the highly-available system that wouldn’t have occurred in a non-redundant one – such as when the heartbeat between a master/slave pair fails and the slave puts itself in service, leaving two active masters, IP address conflicts, and general badness.
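That split-brain scenario can be sketched in a few lines. The flaw is in the naive failover rule itself – "promote yourself if you haven’t heard a heartbeat" – which can’t distinguish a dead master from a healthy master whose heartbeats are being dropped by the network. The class names and timeout here are hypothetical:

```python
# Sketch of the split-brain failure mode: a naive failover rule leaves
# two masters active when the network (not the master) fails.
# Names and timeouts are hypothetical, for illustration only.

HEARTBEAT_TIMEOUT = 3  # seconds of silence before the slave fails over

class Node:
    def __init__(self, name, is_master=False):
        self.name = name
        self.is_master = is_master
        self.seconds_since_heartbeat = 0  # time since last peer heartbeat

    def check_peer(self):
        # The naive rule: silence from the peer means the peer is dead.
        if not self.is_master and self.seconds_since_heartbeat > HEARTBEAT_TIMEOUT:
            self.is_master = True  # slave promotes itself

master = Node("db1", is_master=True)
slave = Node("db2")

# A network partition: the master is healthy, but its heartbeats
# stop reaching the slave.
slave.seconds_since_heartbeat = 10
slave.check_peer()

# Both nodes now believe they are the master -- split brain.
```

Real clustering stacks address this with quorum and fencing, but the sketch shows why the redundancy mechanism itself can become the failure.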
Don’t get me wrong – I love technology, in cars and in IT. I hope everyone leverages technological advances to the fullest extent possible. However, I recognize that as technology gives us new capabilities, it also imposes new constraints and costs, and as Web Ops professionals we need to understand those costs and constraints so we can make informed, intelligent decisions about how we apply new technologies to our infrastructures.