Book Review: Ahead in the Cloud by Stephen Orban

I found a lot of great nuggets in this book, but I’m not sure I’d recommend it to everyone. If you’re trying to sell your boss on a move to AWS, definitely give her this book. If you’re trying to figure out what your IT priorities are, I’d recommend Leading the Transformation or The Phoenix Project. If you insist on reading something written by an AWS employee, I’d even try War, Peace, and IT by Mark Schwartz.

While I did find nuggets I enjoyed and found informative, most chapters simply identify a problem and then explain how being in the cloud mitigates it. That’s not really surprising considering the book packs 53 chapters into only 298 pages (at least as Kindle counts them). The big exceptions are Chapters 5 and 53, which outline processes and are a little more detailed (though each could be an entire book). The other little nuggets that I liked were:

  • Chapter 23’s discussion of the future of Managed Services in the cloud was interesting. I thought the combination of migrating applications to the cloud and then managing them (essentially turning them into SaaS) was a great point, as was the highlighting of DevOps as an important capability of an MSP.
  • I liked Chapter 33’s discussion of Hybrid Cloud. Too often AWS and their surrogates talk about Hybrid Cloud as a fantasy. Unless you have the kind of capacity that makes it worth building an entire datacenter that runs just for you (like Netflix), you’re probably better off having everything in the cloud. The problem is, you can’t just put it there by magic. Orban does a nice job recognizing Hybrid Cloud as a stage (maybe a VERY VERY long one), but not a destination.
  • I liked the discussion in Chapter 46 about a company’s vision for moving from Centralized IT to Exponentially Growing IT.

2021 New Years Resolutions

2020 was a year where I expected to just entrench and work on establishing myself at IBM. It was a really good year for that, and (thanks to a lack of travel and a bit of a COVID slow period) also allowed me to set some pretty ambitious fitness goals. That leaves me feeling pretty good about what I was able to accomplish, making it possible for 2021 to be more about refining and doubling those successes.

Specifically, I have set three goals:

  1. I want to really focus on working with CIOs/CTOs to make their “Infrastructure and Operations” teams into a value-add part of IT. As I spend more time with more companies, I consistently see them able to articulate how a better customer experience or AI can add to their bottom line, yet unable to understand why they can’t securely and efficiently build the platforms those innovations need to run on. I intend to spend 2021 moving beyond the containerization, cloud, and automation discussions I lead customers through today and pulling them together into a more strategic view of the organization. You’ll see this in more frequent observations on Twitter, some blog posts, and (if you’re lucky enough to be a client) a new model we’ve been working on for how to change your culture and tooling.
  2. I am planning to be much better about keeping up with my professional contacts. I realized yesterday that I haven’t done an inventory of my contacts since very early 2020. My plan will be to do better at connecting with people I’ve worked with on LinkedIn and reaching out to folks I worked with a few years ago to see how they’re doing.
  3. I managed to finish 2020 in reasonable shape. I have not traditionally been good at staying in good shape once I reach it. I tend to yo-yo back and forth between in shape and out of it. My goal this year is to stay around my current level of fitness for the whole year.

Book Review: Leading the Transformation by Gruver & Mouser

If you’ve read much of Gene Kim’s work you’ve heard of the transformation that HP Printers made to their firmware development process. Kim cites it so frequently because it is an example of one of the oldest software development groups around making software that’s some of the hardest software to test (printer emulators anyone?) transforming from a waterfall development process to an agile one. Gruver and Mouser are the executives behind that transformation and this book is full of great lessons that everyone who’s had an IT department for longer than Amazon’s been a bookstore can learn from.

Let me give you three of my favorite things about this book, and I encourage you to read it and come up with your own:

  1. In the very first chapter they talk about how many companies make the mistake of trying to start an enterprise agile transformation by just having one team transform. It doesn’t work because, at least in most companies, a meaningful development team can’t go to production on its own. You have that one transformed team running scrums and using whiteboards full of sticky notes to deliver code to an integration test environment where it will sit for 6 months. The key is to find ways to iteratively improve the whole organization (or at least ALL the parts that have to develop together) towards agile.
  2. Having worked in technology all my life, I had never really considered why software development should be managed so much differently than your other cost centers. I really liked their point that what makes software development so hard to plan is that it’s hard to see and often goes from looking 80% done to looking destined for the trash heap overnight. That ever-changing aspect is also what’s great about software. It’s hard to predict where you’ll be in 6 months on a software project, but it’s much easier to change your mind today about where you want to be tomorrow. Agile software development makes so much sense because it uses the inherent strengths of software rather than trying to mold it into waterfall techniques.
  3. Finally, this book isn’t afraid to make the tough point that executive teams must understand technical concepts like continuous delivery to be successful. They can’t just be great people managers or inspirational leaders. They have to know when to make exceptions to the “no branches” rule and when to require automated tests. These are things that reasonable development managers can disagree on, but a knowledgeable executive must set a standard for.

Book Review: Value Stream Mapping: How to Visualize Work by Karen Martin

I confess, Value Stream Mapping as a technique is something I already do with clients and I was reading this book mostly to make sure that I could say that I did (and now it’s on the Internet!). I will say that after being taught value stream mapping primarily as it relates to the DevOps movement and CI/CD pipelines, I was surprised to see how general the approach is. It seems like it would be really valuable for enterprises to do in areas outside of manufacturing (it’s based on Lean and started as one of the Toyota techniques) and IT. Maybe someday I’ll focus more on my MBA and less on my Comp Sci background and actually use it that way.

In this book “review”, I thought I’d talk about the nuggets throughout the book that look a bit different when our team creates a value stream map to find opportunities for IT Automation. Feel free to leverage this in your business, or give us a call and we can help you get started!

First of all, we usually do a couple of Value Stream Maps in our spare time during the first week of an Ansible Tower Foundation engagement, while also installing Ansible and connecting it to our client’s Git, Active Directory, Credential Management, etc… Consequently, our maps are usually a lot less involved. This difference results in a few big changes:

  1. The scope has to be different. The book talks about customer value streams running from “ring to ring” (from the ring of the phone call placing the order to the ring of the cash register). We still like this focus, but for IT Operations we typically map from the customer’s “ring” (a ServiceNow request for a new VM) to service fulfillment (a notification to the user that they can log in).
  2. The session itself is shorter. I like to keep it to one or two hours depending on complexity, which allows for nice crisp timekeeping: 10 minutes to introduce the concept, 20 minutes for current state, 10 minutes to brainstorm opportunities for improvement, and 20 minutes for future state.
  3. I think the charter becomes MORE important when you want to do a short session. It is important to have your sponsor identify the limitations of the exercise and the precise scope. This will limit wasteful conversation during the workshop.

Since our focus is typically on finding out how best to improve an IT Operations value stream via automation, we are looking for slightly different things in the value stream map:

  1. We do not limit ourselves to looking only for automation opportunities. We are looking for all manner of improvements… avoiding batching, identifying opportunities to run in parallel, etc… It’s not our goal to fix those as part of the project (though IBM has some excellent process consultants). It’s merely our goal to avoid investing in automating a process that shouldn’t exist at all or is not a bottleneck.
  2. The Gemba walk is usually not a physical one. I like to have actual operators walk an actual ticket on screen where possible. In longer running processes it may be constructive to view several tickets in different phases.
  3. We use all of the resulting automation “kaizens” to start our clients’ automation backlog. Because of the value stream mapping we can be clear about just how much value can be expected out of each automation.

Overall, I definitely recommend reading the book, even if you’re only planning to use the technique for CI/CD or IT Operations value streams. It will give you a lot of perspective on the history of the technique and its uses in the wider enterprise, in addition to arming you with the vocabulary you need to leverage it for your use case.

Book Review: Unicorn Project

I actually finished The Unicorn Project a couple weeks ago and didn’t take great notes on it, so this won’t be as extensive a review as I often provide. I did enjoy the book, and like its predecessor (The Phoenix Project) it does a great job of showing how some of the best processes in the industry come to life in a specific company. It beautifully covers the DevOps revolution at the team level with containerized dev environments, versioned APIs, and everything else required to decouple deploys, automate tests, and implement CI/CD. It also touches on the transformations necessary to abstract your mainframe/legacy environment and the evolution of management thinking around experimenting with parts of your org while gaining efficiencies in others.

My only real critique is that it’s not very realistic. It demonstrates how powerful these concepts CAN be by taking an ideal company. The fictitious company in the book has a CIO who understands the need to empower engineers and keep the vision high-level (rather than command and control), a great agile coach, an operations team willing to put in the extra work required to give control to the developers, and a development team excited about experimentation… not something I often see in clients. It also glosses over things I’ve seen be very difficult. At one point it takes a development team over three weeks of hard work to put together an automated test suite for their application that includes (presumably) either mocks or test instances of upstream and downstream systems. They’re right to call this hard work, but I’ve rarely seen developers tackle it in a few weeks, even on a backend system without a UI.

The best use of this book is to give it to a CIO or CTO who struggles with what success looks like in their organization. If you still have someone who’s trying to count server consolidation ratios or incident tickets, give them this book.

Book Review: Prediction Machines

If you’re going to be working in technology for more than 10 years, you should read this book. It’s written by a set of economists associated with the Creative Destruction Lab at the University of Toronto. As you might imagine from an incubator with that name, they have made many investments in Machine Learning companies. The observations in the book are less about how machine learning works (though there is obviously some introduction required) and more about the implications for the wider economy.

As the title suggests, they believe that the current advancements in artificial “intelligence” are not related to general intelligence; rather, they reflect computers’ improving ability to distill lots of data about what humans have done into a prediction. For example, Google’s image recognition doesn’t actually “recognize” images; it is only able to say, “based on other sets of pixels that people have identified as tables, I think there is a 98% chance that this is a photo of a table.” This key assumption then allows them to predict how further advancements in these prediction machines will change which kinds of companies, jobs, social programs, etc… will go up (or down) in economic value.

As they put it, “Economics offers clear insights regarding the business implications of cheaper prediction. Prediction machines will be used for traditional prediction tasks (inventory and demand forecasting) and new problems (like navigation and translation). The drop in the cost of prediction will impact the value of other things, increasing the value of complements (data, judgement, and action) and diminishing the value of substitutes (human prediction).”

The remainder of the book explores logical deductions from that premise. Some of the more interesting ones:

  • Some areas of prediction only become real game changers when the machines reach near-100% certainty. For example, imagine if, when Amazon made a recommendation, they could be 98% sure you’d want to buy the product… would they automatically send the product to a distribution center near your house? Maybe just send it straight to you and let you return the one product in fifty they’re actually wrong about.
  • The value of data is going up quickly.
  • Machines and humans have distinct strengths… humans tend not to be thorough, while machines can’t independently discover a new type of data that should be incorporated. The two will be paired, with machines doing the prediction and humans doing the judgement.
  • They identify six types of risks associated with Prediction Machines. The most interesting is that, for tasks like flying an airplane, humans grow less and less skilled as autopilot does more.
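The Amazon bullet above is really an expected-value argument. Here is a quick Python sketch of it with placeholder numbers (the $10 margin and $20 return cost are my assumptions for illustration, not figures from the book) showing why near-100% accuracy is the tipping point:

```python
def ship_ev(p_correct: float, margin: float = 10.0, return_cost: float = 20.0) -> float:
    """Expected profit per item shipped before it's ordered: keep the margin
    when the prediction is right, eat the return cost when it's wrong."""
    return p_correct * margin - (1 - p_correct) * return_cost

# At 98% accuracy (wrong on one product in fifty), proactive shipping pays:
ship_ev(0.98)  # ~= 9.4 per item

# At 60% accuracy, the exact same idea loses money:
ship_ev(0.60)  # ~= -2.0 per item
```

Under these assumed costs, the strategy flips from ruinous to profitable somewhere around two-thirds accuracy, which is the book’s point: incremental gains in prediction quality can unlock entirely new business models, not just better versions of old ones.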

“Book” Review: Beyond the Phoenix Project

This “book” is actually a set of conversations recorded by John Willis and Gene Kim. The main advice I can offer: absolutely do NOT buy a printed version; this isn’t really a book. It was apparently inspired by “Beyond the Goal”, a similar lecture series by Eliyahu Goldratt, who authored “The Goal”, the novel about the Theory of Constraints that inspired Kim to write a novel to explain DevOps. I highly recommend getting it, even if you only listen to the modules on Lean and Safety Culture.

The first two chapters are not the most interesting. They cover Goldratt and Deming. They aren’t uninteresting, but it was more detail than I needed on the individuals.

The next two chapters are fantastic, going through many of the concepts that were borrowed when creating DevOps. After establishing Goldratt and Deming as the thinkers borrowed from most thoroughly, they turn to Lean and Safety Culture, covering the history of each discipline and the most important parts that were borrowed. I learned a TON from these sections. I particularly enjoyed the section on the Andon cord (which is meant to be pulled on a production line whenever anything goes wrong). I had glanced at the concept before, but had always dismissed it as something only really useful for the most mature organizations. Kim and Willis showed me how Toyota actually treats it: they panic when it’s not pulled enough, and not every pull stops everything; it’s more of an escalation. The comparisons to a pipeline are obvious. The following chapter is a recording of a conversation between leaders in Lean, Safety Culture, and DevOps; there were a few good nuggets, but I’d generally say you can skip it.

The only remaining module that I found interesting was the case studies, particularly the stories about Target and Nordstrom. It also closes with an interesting observation from a conversation Willis had with the CFO of Goldman Sachs, who is apparently tracking roll-outs of DevOps technologies specifically by name (Kafka is mentioned specifically). It says something about how critical agility is becoming, independent of the business functionality being built on top of it.

My Next Move: IBM “Journey to the Cloud”

I’m really excited to announce that I’ve taken a job with IBM, in their advisory cloud consulting service. The team I’ll be leading will be looking at how organizations can architect internal and external cloud solutions on both IBM/RedHat products/clouds as well as on other external providers like AWS/Azure/GCP. I will be based out of New York City, but have responsibility across all of North America. I’m excited about this opportunity both because I think IBM is well positioned to be successful and because I think this job is a great fit for me personally. I’ll use the balance of this post to give you two reasons for each.

I am excited that IBM has a chance to be the preferred provider for people who care deeply about their internal/hybrid cloud. With the acquisition of RedHat and its suite of tools for containers/kubernetes and devops, IBM has more capability than the leading external cloud providers to help you build your internal cloud and then reach out. I believe this will be important to companies that have legacy applications that are not cloud native (and therefore run inefficiently in a cloud pricing model), have large investments in their datacenters and enough demand to cost-effectively use them, and/or have applications that for compliance/security reasons they’re not comfortable running from the public cloud.

I am also excited that IBM can be a trusted advisor to the “second wave” of companies moving to the cloud. The first wave of cloud computing was dominated by the Netflix, Facebook/Instagram, Uber, Twitter, etc… unicorns. They still represent a huge portion of the current public cloud resource consumption. These first-wave companies are, primarily, software engineering companies with no legacy software. They have been able to capitalize on cloud models because they could invest in the engineering to do so with only a few applications that serve billions of people. For companies in this second wave (I’m mostly familiar with Financial Services… but much of the Fortune 500 is with them), the cloud is more challenging. It requires deciding when to re-platform an application that only has 2 developers supporting it. It requires retraining/retooling a development staff (or hiring one and teaching it the intricacies of a business that’s more sophisticated than a tweet). It requires replacing legacy infrastructure services/products with what the new cloud-based applications require. These second-wave companies have relied on IBM for decades for their infrastructure, middleware, etc… and I am happy to be part of the group that will let them rely on us for cloud too.

Personally, this gives me a chance to stay in New York City! I have started to fall in love with this city and the friends I have met here. If you’ve known me for a while you know that I’ve moved a lot for jobs in my career; in fact, this will be the first time that I have taken a job in the same city as the previous one. I’m looking forward to putting down some roots and really enjoying all that the Big Apple has to offer (all while getting to travel a lot!).

Finally, I’m also excited to be back in consulting. While I think my last two jobs (standing up Kubernetes at RBC and completing a large-scale software development project at Fannie Mae) have shown I can excel at implementing and operationalizing, I’m excited to get back to what I do best: helping people identify opportunities and proving out their value. I’m sure in a few months, when I hear that a client has dropped the ball on a project I proposed, I’ll remember the brown grass on the other side of this hill. For now though, I am really excited to get to know lots of clients and see what they’re doing with the cloud!

If you’ve made it this far, I thank you for taking an interest in what’s going on with me. Hopefully it’s because you think we can collaborate in my new position! My group will be working with all of the major external cloud providers, and we’re going to be aggressively hiring engineers, developers, architects, and consultants who can help IBM’s clients with their “Journey to the Cloud”. If you’re interested in seeing how we can partner, reach out to me on LinkedIn.

Clever Idea: Leave The House Out of It (my app)

One of my favorite experiences has always been going to Las Vegas on an NFL Sunday.  Looking at all of the potential wagers, reading up on all of the games, talking with my friends about what they see, and placing a few bets while drinking that stale coffee at an abandoned Blackjack table is a great way to start a day.  The best part of any weekend in Vegas though, is sitting at the little bar outside the sportsbook at the Venetian (the one where you can get free drinks by occasionally playing $1 Video Poker) and watching 14 games over 12 hours on a dozen big screen TVs.  I try to make it out a time or two per season to risk a couple hundred dollars.

Unfortunately, as a software developer and lover of discrete math… I know just how mathematically stupid this is.  The longer you gamble on anything against the house (the gambler’s term for the casino), the less likely you are to be ahead.  Casinos build a “vig” into their odds to make sure they get more than their fair share of every bet.  The easiest way to illustrate this is to look at a bet that is perfectly even, like the flip of a coin.  The way the House prices bets, if you wanted to win $100 on “Heads” you would have to bet $110; same with Tails, risk $110 to win $100.  While you might win this once or twice, the House will eventually catch up with you.  That’s why I only make football bets in Vegas, I only do it a couple times a year, and I never bet much.
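To make the coin-flip example concrete, here’s a tiny Python sketch of the expected value of that risk-$110-to-win-$100 pricing (all numbers are the ones above):

```python
def expected_value(p_win: float, win: float = 100.0, risk: float = 110.0) -> float:
    """Expected profit of one bet: win `win` with probability p_win,
    lose `risk` the rest of the time."""
    return p_win * win - (1 - p_win) * risk

# On a fair coin flip, the House's pricing costs you $5 per bet on average:
ev_coin = expected_value(0.5)  # 0.5 * 100 - 0.5 * 110 = -5.0

# To break even against that pricing, you must win risk / (win + risk) of your bets:
break_even = 110 / (100 + 110)  # ~0.524, i.e. pick winners about 52.4% of the time
```

That extra 2.4% over a coin flip is exactly the House’s tax, and it compounds the longer you keep playing.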

Over the last few years though, I’ve found better and better ways to enjoy many Sundays the same way I enjoy them in Vegas (although I trade the Venetian for my local sports bar with Sunday Ticket) and without paying a tax to the House.  It started 6 years ago when my brother and I hit upon the idea that each of us could pick a few NFL games for a friendly wager and the other person had to take the “House” side, but couldn’t charge the tax. This way as long as I picked better than he did, I would end up winning money.  This allowed us to watch the games together and enjoy them like we were at the casino, but without the need to fly to Vegas and pay the casinos (worst case scenario, I lost money and was paying my brother).

Four years ago the idea grew again: we wanted to reduce the record-keeping burden and include more of our friends.  That year, the first version of Leave The House Out of It (LTHOI.com) was born.  The idea behind it is a simple extension of what we’d been doing for a couple of years.  We would build a “League” of friends, and when any one of them wanted to pick a particular game, the rest of us would all take a tiny position against them.  Having a lot of people playing with us made the system work even better; now when someone takes a bet that I am forced to take the other side of, I share the responsibility with all the other people in the league.  Over time, these “house positions” tend to even out such that we are truly “Leaving the House Out of It”.
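As a rough sketch of the mechanics (the even split among non-pickers and the even-money payout here are my simplifying assumptions for illustration, not necessarily LTHOI.com’s actual rules), settling one pick in a league might look like:

```python
def settle_pick(picker: str, stake: float, picker_won: bool, members: list[str]) -> dict[str, float]:
    """Profit/loss per member for one even-money pick: the picker risks `stake`,
    and every other member takes an equal slice of the opposing "house" side."""
    others = [m for m in members if m != picker]
    share = stake / len(others)
    if picker_won:
        return {m: (stake if m == picker else -share) for m in members}
    return {m: (-stake if m == picker else share) for m in members}

league = ["alice", "bob", "carol", "dave"]
result = settle_pick("alice", 30.0, picker_won=True, members=league)
# alice wins $30; bob, carol, and dave each cover $10 of the house side,
# so the league as a whole nets to zero -- no casino taking a cut.
```

Because every dollar won by a picker is a dollar lost by the rest of the league (and vice versa), the game is zero-sum by construction, which is the whole point of leaving the House out of it.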

We soon discovered that this idea doesn’t even require that money be involved.  With the website version, instead of actual gambling, my friends and I discovered that we can just play for pride.  Our win-loss records and earnings against each other are tracked alongside the rest of the members of the “league”, and we can battle over who has the best record.  Now I get all of the thrill of the Venetian with none of the risk!

Book Review: War, Peace, & IT by Mark Schwartz

The book bills itself as being for the CEO who’s looking to make sense of how the changes in IT should change the way they view the CIO and IT. It mostly lives up to that, though you’ll need some knowledge of agile/cloud. As a technologist, I also found it valuable to hear the language and examples the author uses, because I think they’ll be valuable as I talk to development managers and infrastructure executives who are trying to figure out how to sell the agile/devops/product/team approach to a business that is used to thinking about “keep the lights on”/multi-year initiatives/chargeback. Overall, I’d recommend the book as a quick read, mostly to remind you of things you already know/feel and to give you words that’ll help you say them.

A few things I liked in particular:

  • One of the primary points that he makes is that the world is changing so quickly right now that the biggest risk facing most companies is that they won’t be able to change fast enough. With that in mind he argues that DevOps and Cloud are actually investments in Risk Management. I find this powerful for two reasons:
    1. In many companies it’s the people carrying the title “Risk Manager” who want to stop things like deployments during business hours and hosting data outside the data center. This points out that (often, not always) those risk managers are protecting the company from the wrong/lesser risk of small outages at the expense of the big risk (not being able to innovate as fast as your competitors).
    2. It helps justify the cost of some of the DevOps and Cloud transformations that need to happen. Often these are hard to justify when compared with the opportunity to deliver on business requirements or to use automation to save money in infrastructure. Framing DevOps as a risk management play helps explain why there’s a need to invest in making the company better able to change.
  • He actively fights the urge to run “IT as a Business”, arguing that it needs to be part of the company like finance or marketing. He rightly points out that in most companies IT is expected to operate like a sort of sub-contractor without the corresponding freedoms: it can’t hire freely and set salaries optimally, it can’t choose to avoid business customers that become untenable, and it can’t charge more for higher-demand items instead of passing on costs (at least in most chargeback models I’ve seen). Additionally, making IT a sub-contractor adds red tape to the change process; IT is incentivized to offer standard products, to require the business “pay” for changing requirements, etc…
  • He uses one of my favorite analogies for explaining agile vs project-based software development to business people: agile is like buying options. Business people understand the difference between buying assets and buying options… most even remember that moment of surprise in business school when they realized just how different the prices are. Agile software development is like that: you can build an MVP for the smallest slice of the market you can think of, then wait and see if users like it before agreeing to pay for the rest of the project.