DevOps and Formula 1 – Automation

Formula 1 racing and DevOps. Two things that I love. At first glance, you might not think they have anything in common. But they do! Both of them are about maximizing the throughput of your systems through relentless focus on improving performance and reliability.

Observation Tower at Circuit of the Americas during the 2016 F1 race

How did I start thinking about this? At the Bahrain race this year, Ferrari had a horrific accident in the pit lane that seriously injured one of their mechanics. The car started driving away while the mechanic was still standing in front of the rear tire. These cars accelerate fast, and in a split second the moving tire hit the mechanic’s leg and broke it in multiple places. Of particular interest to me, there’s an automation backstory to this horrible event. In the olden days, releasing the car from a pit stop was the responsibility of a human (affectionately known as the “lollipop man”). But in the last few years there’s been a switch to an automated system that checks various things before it turns the light green, signaling the driver to go. In this case, it checked all the things it’s supposed to check and turned the light green – but the mechanic was still standing in front of the tire. (Obviously that’s not one of the things it checks for.) While the details aren’t 100% clear, it appears that the particular sequence of events wasn’t accounted for in the design of the automation – in other words, an edge case. By taking the human out of the decision-making loop, this particular implementation of automation led to a tragic outcome.

Obviously the link to DevOps is automation. Automation is essential to success in today’s world of computing infrastructure. But it’s not a panacea and can lead to bad outcomes, typically in unaccounted-for edge cases (just like the pit stop). A classic example is automation for simple self-healing. Say you run a bunch of containers and sometimes they get into a bad state and need to be destroyed and new ones spun up. In the olden days some human would keep an eye on the fleet and kill/restart things manually when needed. Automation removes that toil: you set up a simple liveness monitor for each container, and when the monitor fails, software automatically kills and restarts it. That works great – most of the time. But you can get pathological behavior when some downstream component has a hiccough that causes all of your liveness probes to return failures for a brief period: every container gets killed and restarted at once, and now you have a service outage.
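To make that failure mode concrete, here’s a minimal Python sketch of that kind of naive self-healing loop. The check_liveness and kill_and_restart helpers are hypothetical stand-ins, not any real orchestrator’s API; the point is that nothing in the loop distinguishes one bad container from every probe failing at once.

```python
import time

def check_liveness(container_id: str) -> bool:
    """Hypothetical probe: returns False when the container looks unhealthy."""
    ...

def kill_and_restart(container_id: str) -> None:
    """Hypothetical helper: destroys the container and spins up a replacement."""
    ...

def naive_self_healing_loop(container_ids: list[str]) -> None:
    while True:
        for cid in container_ids:
            if not check_liveness(cid):
                # Any failed probe triggers an immediate restart. Nothing here
                # distinguishes one flaky container from a transient downstream
                # hiccough that makes *every* probe fail at once.
                kill_and_restart(cid)
        time.sleep(10)
```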

Automation is a powerful tool, in both the F1 and the DevOps context. The key is to figure out what should be automated, and what shouldn’t. You want to remove as much of the toil from humans as possible, but in places where key decisions need to be made – well that’s what humans are there for. Is releasing a car from a pit stop that kind of key decision? That’s arguable, but I’d vote yes. Is restarting a container a key decision? Absolutely not. Is restarting all your containers simultaneously a key decision? Yes! Make sure your automation is clear on what is and isn’t a key decision, and treats them appropriately!
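One way you might encode that distinction in the automation itself – again just a sketch, reusing the hypothetical helpers from above plus an assumed page_human escalation hook – is to restart individual containers freely but refuse to act when a big chunk of the fleet looks unhealthy at once:

```python
MAX_RESTART_FRACTION = 0.2  # assumed threshold; tune for your fleet

def page_human(message: str) -> None:
    """Hypothetical escalation hook (e.g., alert the oncall engineer)."""
    ...

def guarded_self_healing_pass(container_ids: list[str]) -> None:
    unhealthy = [cid for cid in container_ids if not check_liveness(cid)]

    if len(unhealthy) > MAX_RESTART_FRACTION * len(container_ids):
        # Mass failure looks like a downstream problem, not N independently
        # broken containers -- treat it as a key decision and ask a human.
        page_human(f"{len(unhealthy)}/{len(container_ids)} containers are "
                   "failing liveness checks; automatic restarts suppressed")
        return

    for cid in unhealthy:
        kill_and_restart(cid)  # routine toil: safe to automate
```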


Your Nines Are A Lie

Is your service three nines, four nines, or even five? No matter what your answer is, it’s almost surely inaccurate.

I recently went through an exercise at work to calculate the expected availability of one of our foundational systems. It reminded me how little these calculations have to do with actual availability as experienced by consumers. Expected availability numbers are generally based on hardware failure rates: you combine how often a component fails with how long it takes to repair, and that gives you the component availability. An individual server might have an expected availability of 99%, which means in an average year you’d expect it to be down for repairs for about three and a half days. An easy way to raise the availability of a system is to add redundant components – if you have two of those servers, your system availability goes up to 99.99%, because the chances of both servers failing at the same time are really small. With three servers you get up to 99.9999%. As you make the system more complex, with more layers and more dependencies, the math gets a little more complicated but the idea stays the same, so you can calculate the expected availability of your entire system from the availability of each of its components.

If you’re running a production system at scale, a typical design (redundant data centers, redundant circuits, redundant systems) could easily reach 99.999% (five nines) on paper. That’s about 5 minutes of downtime per year. For calibration, it would take 12 years of uninterrupted service to be able to absorb a 1-hour outage and still be at five nines. But every big outfit – including Google, AWS, and Facebook – has experienced outages longer than that, even though they have big budgets and super smart people designing their systems. Why?
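Before getting to the answer, here’s the arithmetic behind those paper numbers as a short Python sketch. The figures match the ones above; note that the model assumes failures are independent and purely hardware-driven.

```python
def component_availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Availability of a single component: uptime / (uptime + repair time)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

def redundant_availability(a: float, n: int) -> float:
    """Availability of n redundant copies, assuming failures are independent."""
    return 1 - (1 - a) ** n

a = 0.99                                  # one server: ~3.65 days of downtime/year
print(redundant_availability(a, 2))       # ~0.9999   -- two servers
print(redundant_availability(a, 3))       # ~0.999999 -- three servers

# Five nines allows roughly 5.26 minutes of downtime per year:
print((1 - 0.99999) * 365 * 24 * 60)      # ~5.26
```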

It turns out that most big outages are not caused by component failures. The most common cause of a major outage is someone making a change. All three of the outages I linked to above were caused by a human making a change. Reliability calculations based on component failures tell you absolutely zero about how likely your system is to fail when you make changes – that depends on the quality of your tooling, the frequency of your changes, the design of your system, and the capabilities and training of your team. The second most common cause of outages is overloads – where your system (or some critical subsystem) can’t keep up with what’s being sent at it. Two of the three examples involved overload conditions.

I’ve seen a lot of outages in my career and a vanishingly small percentage were caused by hardware failures – pretty much any decent system these days has been designed to handle individual component failures. The trick is figuring out how to make your system resilient in the face of change and making sure you have the tooling you need to be able to react to and quickly fix any problems that do come up (including being able to quickly add new capacity if needed). If you’re trying to build a reliable service you should pay just as much attention to those as you do to the reliability of your system components!


Hosting simple webapps for free with GitHub Pages

I wanted to put up some simple webapps. In the past, I’ve always had an Internet-connected server handy for such a purpose – either in a spare room in my house or a cheap VPS (I miss you, unixshell!). But I don’t anymore. So where to put these webapps? Even a t2.nano runs a few dollars a month! Turns out there is a great solution: GitHub Pages. What makes this great?

  • The code is already in GitHub anyway
  • It’s super simple to turn your repository into a hosted webapp – just go into the repository settings, scroll to the “GitHub Pages” section and select a branch to serve from. Now the index.html in your repository will be loaded when someone goes to a URL like https://you.github.io/your-repo/

And BOOM now you have free hosting of your webapp (if all the logic is client-side).

For an example, check out my simulator for evaluating different toilet-seat strategies. (code)

Or my estate tax calculator. (code)

Both are simple webapps where all the logic is implemented in client-side JavaScript. And hosted for free! Thank you, GitHub!


Are you SRE or are you DevOps?

People have asked me, “Are we doing DevOps, or are we doing SRE?” I’ve also heard (and this is worse): “We’re an SRE team – we don’t do DevOps.” These distinctions don’t make sense, because SRE and DevOps aren’t actually different things. SRE is DevOps. To be more precise, SRE is a specific implementation of DevOps. DevOps is a broad (and vague) term. It’s more of a philosophy than a methodology – it’s a perspective on the world and a set of patterns to apply. SRE shares the DevOps philosophy and many of the same patterns.

The term “SRE” generally refers to Google SRE, which is a particular implementation of DevOps inside of a ton of Google-specific context. (SRE – both the term and the practice – originated at Google and has only recently been adopted by other organizations.) There are several things DevOps and SRE have in common:

  • Focus on solving problems with software
  • Ownership and empowerment of the team responsible for a service
  • Learning relentlessly from successes and (especially) failures
  • Driven by data and metrics

Google SRE adds a lot of specifics – some of the most interesting are aspects of economics and incentives, such as:

  • Common (human) resource pool between software developers and SREs – scarcity of whom leads to explicit decisions to optimize between features and reliability
  • Use of an “error budget” to throttle the rate of change for a product – including the unintuitive guidance that if you are exceeding your SLO for availability, you should launch more features (a rough sketch of this follows the list)
  • A cap of 50% of SRE time on operational tasks (known as “toil”) – to ensure the system can scale faster than the team required to support it
  • At least 5% of operational work done by software developers – to maintain visibility of the operational load the software creates
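Here’s a rough sketch of how an error budget turns that guidance into a mechanical decision. The SLO, window, and example numbers are all made up for illustration – Google’s actual policies are richer than this:

```python
SLO = 0.999                    # assumed availability target
WINDOW_MINUTES = 30 * 24 * 60  # 30-day rolling window

def error_budget_remaining(observed_downtime_minutes: float) -> float:
    """Fraction of the window's error budget still unspent (negative = blown)."""
    budget_minutes = (1 - SLO) * WINDOW_MINUTES   # ~43.2 minutes at 99.9%
    return 1 - observed_downtime_minutes / budget_minutes

def can_launch(observed_downtime_minutes: float) -> bool:
    # Budget left over means the service is running *above* its SLO -- per the
    # guidance above, that's a signal to spend the surplus on more change.
    return error_budget_remaining(observed_downtime_minutes) > 0

print(can_launch(10.0))   # True: plenty of budget left, keep shipping
print(can_launch(50.0))   # False: budget spent, slow down and stabilize
```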

Google SRE operates on top of Google’s internal infrastructure and products. This is an extremely important part of the Google SRE context – they have had brilliant people working for fifteen years on the foundational systems, processes, and tools used to manage Google’s services. And within Google, every SRE team benefits not just from common tooling and infrastructure, but also from repeatable, translatable process. No other SRE team outside of Google works with the same level of foundational support.

As SRE expands outside the walls of Google, I like to think it will come to mean “applying the principles of DevOps at scale.” “Service Reliability Engineering” (an evolution of Google’s “Site Reliability Engineering”) is a much better term than “DevOps” to apply to teams focused on the reliability and performance of large-scale distributed systems, because it reflects the work and the expertise involved. “DevOps,” unfortunately, tends to just create confusion when applied to an organization or a strategy.

What, then, does it mean to do SRE? What does applying DevOps at scale look like? To start with:

  • Automate your infrastructure – build and management
  • Monitor what matters – set explicit SLOs for your services and gather the data both to see whether you’re hitting the objective and to evaluate the effects of changes to your infrastructure or code (see the sketch after this list)
  • Make your code builds and deploys both automated and repeatable, leveraging CI/CD
  • Learn from your failures with an effective retrospective process for incidents and launches
  • Empower your people – software developers and SREs – and hold them accountable for the overall success of the product – which includes its reliability
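To illustrate the “monitor what matters” bullet, here’s a small sketch of a request-based SLI checked against an explicit SLO. The 99.5% target, the 300 ms threshold, and the request counts are all invented for the example:

```python
from dataclasses import dataclass

SLO_TARGET = 0.995   # assumed: 99.5% of requests succeed within 300 ms

@dataclass
class WindowStats:
    total_requests: int
    good_requests: int   # succeeded and met the latency threshold

def sli(stats: WindowStats) -> float:
    """Fraction of requests in the window that counted as 'good'."""
    return stats.good_requests / stats.total_requests

def meets_slo(stats: WindowStats) -> bool:
    return sli(stats) >= SLO_TARGET

before_deploy = WindowStats(total_requests=1_000_000, good_requests=996_500)
after_deploy  = WindowStats(total_requests=1_000_000, good_requests=993_000)

print(meets_slo(before_deploy))  # True  (99.65% good)
print(meets_slo(after_deploy))   # False (99.30% good) -- the change hurt the SLI
```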

DevOps and SRE aren’t at odds. You can learn a lot from both!


Scalable Internet Architectures: A Review

I just finished reading Theo Schlossnagle’s Scalable Internet Architectures. This book is seven years old, but the concepts in it are still as current and useful as they were when the book was published. If your job is to design, build, run, or manage systems at scale, this book is worth reading. Now, scale ain’t what it used to be – this book won’t provide you step-by-step instructions for building the next Google or Facebook (mostly because it focuses on technology and tools, not on process). In fact, when this book was written Facebook was probably running only 10,000 servers or so. But what the book teaches will get you a good chunk of the way toward being able to build a giant.

Here are some of the things I really liked about this book:

  • It uses an actual, real-life example throughout most of the topics, including real empirical results from various implementations. This is indescribably awesome, and related to the next thing I love about the book:
  • Theo has been there, done that, and learned from it. He exudes competence on stage or in person, and it comes through in the book. There is a bit of a ‘tude, but it’s easy to look past it.
  • The distinction drawn between performance and scalability is one that many fail to grasp. Theo explains it in some detail, including why it matters.
  • Theo is an extraordinary troubleshooter, and he presents troubleshooting concepts in this book with such clarity of exposition that it’s easy to overlook how insightful they are.
  • For a seven-year-old book, it was surprising to see an explanation of TCP-level HA and load balancing without the use of hardware appliances. Back then I certainly wasn’t hip to these mechanisms.
  • Theo (and Circonus) are well-known for their focus on business-level metrics (a focus I think is spot on). His description of this and metrics in general is outstanding.
  • The discussion of RDBMS vs NoSQL (Chapter 10 – The Right Tool for the Job) is the best I’ve read on the topic. (Even though “NoSQL” wasn’t a thing when the book was written). He analyzes his workload in terms of requirements against ACID and then shows you why those semantics aren’t relevant in this case. He then walks you through the NoSQL implementation and shows you the resultant speedup. Awesome. I’ve seen fairly significant platform decisions made with far less thought and data behind them.

What’s in this book isn’t glamorous. But it works. If you want to know how to build scalable and reliable online systems, there’s nothing better than to spend a day with Theo. If you can’t do that, then read this book. 


The Freakonomics of Oncall Pay

Wearing the pager. It’s a fact of life for many of us ops folks. I’ve taken part in many a discussion from a management perspective about how oncall duty should be compensated. When the people doing the talking are pointy-haired-manager types who haven’t done oncall themselves, their starting position is often similar to the “per incident support” policies you get from a vendor – if you’re oncall and you get paged, you get paid either a set amount for that incident or an amount proportional to how long you’re working on it. So you might get say $50 per incident, or $25 per hour you’re engaged, for example. And if your oncall duty period goes quietly (no pages), you won’t get anything above your normal salary. Let’s call this the “per incident” model. The other option is a flat wage bump for the period you’re oncall – so for example every week you’re oncall, you get an extra $150 in your check. Let’s call this the “per week” model.

Pager

image provided by flickr user hades2k under a Creative Commons license.

I’ve always been a strong advocate for the “per week” model instead of the “per incident” model, because being oncall is an intrusion by your work into your personal life – it reduces your freedom during your time off work, and it does that whether or not you get paged. You can’t fly to Vegas for the weekend, you can’t get sloshed after work, and in some cases you can’t even take a shower or drop your kids at school without arranging for someone to cover you in case there’s an incident while you’re away from the keyboard. Simply being oncall affects your life, and I’ve always felt that people should be compensated for that – not just for the time where they are actively working an issue.

Then I saw _Freakonomics_ and realized there’s an even more powerful argument for the “per week” model: incentives. In the “per incident” model, your compensation goes up when the system has more problems that require oncall support. In theory, people might try to cause incidents so that they will get paged and therefore get more money. Personally I doubt that happens very often, if at all. However, there’s a more subtle influence on root cause analysis and problem management that I think does have real effects. When you’re paid in the “per week” model, you’re strongly incentivized to address the root causes of problems and improve your systems so that during your oncall week it’s more likely that you’ll be able to sleep through every night and live your life normally for that week. So when you do encounter an issue in the “per week” model, you’re going to really want to figure out what caused it, and to make meaningful changes to prevent it from happening the next time around. Fixing the problem completely does you nothing but good – you’re still going to get paid the same oncall pay for your next stint, and you’re going to have a much more pleasant experience when you’re on call. But in the “per incident” model, putting all that effort into root cause and remediation is actually going to cost you money. The next time you’re oncall, each incident you prevented from happening will mean you don’t get the money for working that incident. So consciously or not, it’s likely you’re not going to work quite as hard to make your systems better as you would under the “per week” model. I believe that this can have real effects and result in your systems being less reliable and less stable than they could be.
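To put rough numbers on the incentive, here’s a tiny sketch using the example rates above ($50 per incident versus a flat $150 per week) and an assumed scenario where solid root-cause work takes a typical oncall week from five incidents down to one:

```python
PER_INCIDENT_RATE = 50.0   # "per incident" model, from the example above
PER_WEEK_FLAT = 150.0      # "per week" model, from the example above

def weekly_pay(model: str, incidents_that_week: int) -> float:
    if model == "per_incident":
        return PER_INCIDENT_RATE * incidents_that_week
    return PER_WEEK_FLAT  # flat: same pay whether the week is quiet or brutal

# Assume good root-cause work takes a typical oncall week from 5 incidents to 1:
for n in (5, 1):
    print(n, weekly_pay("per_incident", n), weekly_pay("per_week", n))
# Per-incident pay drops from $250 to $50; per-week pay stays at $150.
# Making the system better literally costs the per-incident responder money.
```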

How does your company do oncall? Is it per week, per incident, or something else? And how do you think that is affecting incentives and outcomes? I’d love to hear your experiences!

PS: I did a little research into military hazard pay to see if there were instructive parallels there. I found Hostile Fire Pay which seems to follow the “per week” model. Anyone have information/thoughts on how incentives have affected this program?


A Fond Farewell to Cloudscaling

Earlier this month I walked into Cloudscaling’s offices for the last time as an employee, almost two and a half years after I started. I loved my job. I think Cloudscaling’s future is super bright. While there, I learned a ton, I got to work with fantastic people, and I got to work on cool stuff that I believe will really make a difference in the future of how we do computing. Those of you who have talked to me about Cloudscaling probably already know how much I believe in the company and in its mission. Cloudscaling is democratizing agile infrastructure – taking the patterns and concepts that have fueled the hypersuccess of companies like Amazon, Google, and Facebook and building open systems that will allow everyone in the industry to benefit from them. So why did I leave?

IMAG0767

When I got there, Cloudscaling was a professional services company that was building large-scale clouds for its clients. During my time there, we transitioned to a product company, secured a Series A investment round that we used to invest in building, selling, and supporting that product, and secured a Series B investment round (announced last week). As VP of Engineering, my goal for the Series B timeframe was to build a sustainable and scalable technology team that could develop, maintain, and support the product – and have that team be stable enough to continue doing so without my help. We got there – so now it’s time to pass the reins and let someone else take it from here.

As for myself, I started at Walmart Global eCommerce last week where I’m looking forward to taking the new ideas, concepts, and technologies that I’ve been working on and proving them out in the real world at one of the world’s largest ecommerce players.

To all my friends at Cloudscaling – I miss you all and wish you the best of success. Cloudscaling is the gold standard for OpenStack-based products, and thanks to all your hard work the future of computing will be here sooner than anyone thought. I will be forever grateful for the experience you gave me and forever proud to be a Cloudscaling alumnus. Thank you for everything.