If you’ve met me for more than a few minutes you’ve heard me talk about my passion project, Leave the House Out of It (lthoi.com). If you’ve really paid attention to my blog posts you’ve caught that a couple years ago I rearchitected the app to move to an event-based, serverless architecture on AWS. After a year of not doing very much with the project I’ve had the itch to make some upgrades (more on this next year). Before I did, I wanted to upgrade the CI/CD pipeline I use to manage the code.
While I had moved away from containers/EKS, I did keep the containerized Jenkins that had been deployed alongside my code on the EKS cluster. I got an EC2 server, installed Docker, and deployed the image there. Unfortunately, on an EC2 server Jenkins quickly became both disproportionately expensive and pretty slow. The cost came from the inefficiency of running a dedicated Jenkins server for an app you deploy infrequently. In fact, because the app was all serverless and low volume, I was actually paying more for my Jenkins server than for all of the rest of my AWS charges combined. On top of the cost, the performance was pretty terrible. Jenkins had lost the ability to deploy agents across my cluster and instead churned away on an underpowered EC2 server. This caused larger runs of the pipeline to take upwards of 9 minutes.
Over the last few weeks, I’ve taken the final step into the AWS-native world and adopted CodeBuild, CodeDeploy, and CodePipeline to replace my Jenkins CI/CD pipeline. My application has 5 CloudFormation stacks (5 separate Lambda functions along with associated API Gateways and DynamoDB tables) plus an S3 bucket and CloudFront distribution that host the Angular UI. I ended up with 6 separate CodeBuild projects: one to build and unit test each of the Lambdas, and one to build the UI. For the UI project I took a shortcut and simply used the build service to deploy as well. For the 5 Lambdas, I wrapped the builds in a CodePipeline along with a CloudFormation deploy action for each.
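For illustration, the build half of one of those Lambda projects needs little more than a small buildspec. This is a sketch, not my actual configuration; the runtime version, Maven command, and jar path are all assumptions:

```yaml
# Hypothetical buildspec.yml for one Java Lambda CodeBuild project
version: 0.2
phases:
  install:
    runtime-versions:
      java: corretto11        # assumed runtime
  build:
    commands:
      - mvn clean package     # compiles and runs the unit tests
artifacts:
  files:
    - target/my-lambda.jar    # illustrative jar name; CodeBuild zips this up as the artifact
```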
The only tricky part I found was that I did not want to refactor my Lambdas into an “Application”, so I could not use AWS CodeDeploy out of the box. That made it difficult to use the artifacts from AWS CodeBuild. The artifacts are stored as zip files, meaning I can’t directly reference them from the CloudFormation for Lambda, which expects a direct address of where it can find the .jar file (I wrote the Lambdas in Java). I got around this by having two separate levels of “deploy”. In the first one, I use an S3 “action provider” to unzip the build artifact and drop it in an S3 bucket that I can reference from the CloudFormation. The resulting code pipeline looks like this:
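To make that two-level deploy concrete: the first-level S3 deploy action has “Extract file before deploy” enabled, so the zipped CodeBuild artifact lands in the bucket as a plain .jar, and the CloudFormation stack can then point straight at it. Here’s a sketch of what the Lambda resource can look like, with the bucket, key, handler, and role names all illustrative:

```yaml
# Hypothetical snippet from one of the 5 Lambda CloudFormation stacks
MyFunction:
  Type: AWS::Lambda::Function
  Properties:
    Handler: com.example.Handler::handleRequest  # assumed handler class
    Runtime: java11
    Role: !GetAtt MyFunctionRole.Arn             # role assumed to be defined elsewhere in the stack
    Code:
      S3Bucket: my-unzipped-artifacts            # bucket the S3 action extracts into
      S3Key: my-lambda/target/my-lambda.jar      # the plain .jar, no longer wrapped in a zip
```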
The results are compelling on several fronts:
I was able to shut down the EC2 instance and all the associated networking and storage services. It should save me a total of ~$50/month. It looks like in normal months I’ll be in the free tier for all of the Code* tools, so that’s literally $50/month right in my pocket. I expect all but the biggest software development shops are going to do better with this model than with dedicated compute for CI/CD.
In my case, I also sped up the process considerably. I had been running full build-and-deploys in around 9 minutes, largely because I was using one underpowered server. AWS CodeBuild runs five 2-vCPU machines for my builds and runs my deploys concurrently. That has dropped my deploy time to about 1.5 minutes. (Note: in fairness to Jenkins, I could have further optimized it to use agents to deploy the AWS stacks in parallel… I just hadn’t gotten around to it.)
The integration with AWS services is pretty nifty. I can add a job to deploy a particular stack with a couple of clicks instead of carefully copying and pasting long CLI commands.
In addition, this native integration makes it easier to be secure. Instead of my Jenkins server needing to authenticate to my account from the CLI, I have a role for each build job and the deploy job that I can give granular permissions to.
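As a sketch of what that granularity can look like, a single deploy action’s role might be allowed to touch only its own stack and the artifact bucket. All the names and ARNs here are hypothetical:

```yaml
# Hypothetical IAM policy document for one CloudFormation deploy action's role
Version: '2012-10-17'
Statement:
  - Effect: Allow
    Action:
      - cloudformation:CreateChangeSet      # CodePipeline deploys via change sets
      - cloudformation:DescribeChangeSet
      - cloudformation:ExecuteChangeSet
      - cloudformation:DescribeStacks
    Resource: arn:aws:cloudformation:us-east-1:111122223333:stack/my-lambda-stack/*
  - Effect: Allow
    Action: s3:GetObject                    # read the unzipped build artifact
    Resource: arn:aws:s3:::my-unzipped-artifacts/*
```

Each build project and deploy action gets its own role like this, instead of one Jenkins credential that can do everything.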
There are very few negatives to this solution. It does marry you to AWS, but if you have well written code and a well documented deployment process it wouldn’t take you long to re-engineer it for Azure DevOps or back to Jenkins. It’s definitely going to be my way forward for future projects. Goodbye Groovy.
The word “optimist” in the subtitle is very well earned here. Friedman’s book explains why and how someone can hope that the same technologies and macro-trends that are leading to hyper-nationalism, extreme divisiveness, and massive pollution might actually be harnessed for good. He tells that story with his typically great storytelling and insightful anecdotes.
As usual I found a bunch of tidbits interesting and had a few realizations while reading the book that I’ll just list here:
One of the points made over and over again that resonated with me is how fast technology is moving and how slowly our institutions (particularly laws) are adapting to it. Especially since those technologies are contributing to the gridlock that’s keeping us from effectively regulating them (let alone adopting them for public purposes).
I found it interesting that Friedman suggests his hometown in Minnesota as a place well suited to adapt to the new world, mostly because of the strong community and the small size enabling a single suburb to really make it theirs. I understood the argument, but couldn’t help thinking that I’m glad I live in New York City… I think the combination of wealth and bright people will help push our leaders to adopt new advances more quickly in spite of our large size and less-than-community feel.
We’ve all heard Moore’s law, but I thought this explanation of just how fast computing power has grown to be compelling: “If a 1971 Volkswagen Beetle improved at the same rate as microchips did… In 2015 that Beetle could go 300,000 mph, get 2,000,000 mpg, and would cost $0.04.”
He makes a compelling argument for us entering the “cognitive” era of computing. While I still think this will be slower to take hold than people expect, it is fascinating that we have enough compute power now to just throw a bunch of data at the cloud and let the computers sort out if it means anything and what it means (as opposed to old school computing where you gave it an algorithm to make sense of the data with).
Since the book is optimistic, it makes the point that while artificial intelligence will do a lot of what we do for jobs today, that is likely to actually lead to more jobs. He brings up the example of automation in the textile industry: it actually caused MORE people to be employed making clothes, because the price dropped so far that individuals owned more (far more) than one set of clothes.
If you were trying to get a gist of the overall maturity level of an IT Ops and Infrastructure organization by asking only one question, I think you’d be hard pressed to beat the question, “How mature is your DevOps?” I’d prefer that to “How do you use the cloud?” or “How fast can you provision a server?”. The reason is that DevOps is the only industry buzzword that has a built-in “why”. We get better at DevOps for a specific reason: to lower the friction for agile development teams. There are lots of reasons to go to cloud and lots of reasons to automate things… some of them are good and some of them are bad, but if you’re always seeking to make dev teams more agile, you’re likely on the right track.
That’s why I’m always so enthusiastic to read the “State of DevOps” each year (a report commissioned by Puppet). It helps you see where high-performing DevOps organizations (ones that can deploy to production regularly) are differentiating themselves from lower performers. This can help prioritize your strategic goals and initiatives.
This year I had three key takeaways from the report:
Automation and cloud are key to DevOps, but definitely not the only keys. 62% of companies stuck in “mid-evolution” on their DevOps journey say that they have high levels of automation. 65% of those “mid-evolution” companies are on public cloud, but only 20% are using it effectively.
The DevOps Journey is something we’ll be focusing on for a while. From 2018 to 2021 only 8% of companies graduated to high performing DevOps teams (going from 10% to 18% of all companies).
Platform teams are becoming key. This is something I’ve been working on with my customers at Kyndryl for the last couple of years. Platform teams are highly correlated not only with high performance on the DevOps scale, but also with employees feeling like they know their role.
Check out the report here and let me know if you think I missed something.
I can usually tell by 15 pages into a self-help book whether it’s going to resonate with me; usually, it’s not. I can’t stand being told how much I can get done in the morning if I start at 3am, or that if I work 10x harder than everyone else it will pay off. These are obvious; hard work usually pays off (although if you think it automatically does, you should read Peak by Anders Ericsson).
Dweck’s book is entirely different. It suggests that if you want to find fulfilling success, you should look at the world a little differently. You should stop asking yourself if you are successful and start asking what you can do to grow. I have to admit that 15 pages into this book, I thought the idea was too simple to be useful. The more I read, though, the more I liked the way it challenged me to think about life.
I’ve always been one to enjoy my successes with a little pat on the back (or more likely a celebration scotch). What I considered a success, though, was often just what the rest of the world would view that way. For example, if my team at work won a big new project I would celebrate. If we had a month where we didn’t get a new project, I’d feel bad. It always felt a little cheap celebrating when the project win was just lucky, and it was always tough to feel too bad when we made a great pitch to a customer that I knew would help them in the long run, but that got scuttled when a key stakeholder retired. This book gave me a better way to look at these projects than pure outcome-based success or failure.
Dweck would have us focus on “growing”. In the above example, the team clearly grew in its capability to identify and sell key projects by making that presentation (maybe we also learned how to identify clients we shouldn’t bother with). Dweck points out that it’s not that luck doesn’t exist; it’s just that you don’t want to let your luckiest moments be the way you define yourself. This construct has given me a more consistent way to approach my days. Even on a bad-luck day I start thinking about what I can learn from the situation and the best possible way I can move from here. If I’ve done my best and learned, I feel content and even more ready to take on the next day.
Jeff Bezos built a company that’s worth about the same on the stock market ($1.67T) as the entire circulating supply of Pakistani Rupees ($1.68T). That means you could BARELY trade every Pakistani Rupee in existence for all the stock in Amazon. I guess I’ll read some of his wisdom and see what he has to say.
I found the book pretty interesting. It’s not exactly a page turner and there are parts that are a little repetitive (the guy has clearly practiced telling his life story), but you can leaf through those parts quickly and get back to the good material.
It starts with a republishing of all of his shareholder letters as CEO of Amazon (the last one you’ll have to find on the internet until they publish a new edition). They’re a fantastic read. It’s almost a retelling of the history of the whole internet. From an optimistic Bezos reporting in 1997 that Amazon had “established long-term relationships with many strategic partners, including America Online, Yahoo!, Excite, Netscape, GeoCities, AltaVista, @Home, and Prodigy.” To a one-word title of “Ouch” in 2001, after he saw some 90% of the value of Amazon wiped out. To explanations of how Amazon refuses to take a short-term perspective, even in a world where quarterly earnings are analyzed so heavily. To discussions of how being a big company allows it to make big bets like the Fire Phone (dud) and AWS (clear winner) that it might not always win. To the mentality of “customer first” that Bezos uses to push employees, because customer loyalty only lasts until your competition shows your customer something they never knew they wanted.
The remainder is a collection of Bezos’ speeches. This got repetitive, with stories of how he admired his grandfather for being resourceful on a remote farm and sticking up for his mom when she was a pregnant high schooler. You should feel free to skip around, but don’t miss the speech on Blue Origin (his space company). The company is endeavoring to build the “infrastructure” for future generations to reinvent industries in space. He talks about the need for big lunar landers to send enough equipment to the moon so that the natural resources there can start to be used for construction of more items in space (this makes sense, since it’s much more efficient than trying to launch HEAVY things out of Earth’s gravitational pull). It’s a fascinating way to choose to spend a personal fortune that is roughly 2,000,000 times larger than the average American’s.
Because of where I am professionally (with Kyndryl about to break off of IBM and provide me the opportunity to really grow the size, number of services, and the value we can provide the customers of my Cloud Advisory Service practice), I found the 2016 shareholder letter the most impactful. He talks about techniques for always staying in Day 1, because “Day 2 is stasis. Followed by irrelevance. Followed by excruciating, painful decline. Followed by death. And that is why it’s always day 1.” Resisting the urge to just get what you can from customers, and instead to keep innovating for them, is the energy we’re going to need at Kyndryl.
I rarely bother to post books I read for pleasure on the blog, but I thought that a lot of the people that I know professionally may be interested in this one. I started reading it in November when we were just starting to be able to think about what a post-Trump America might look like. After four years of hating the news so much I rarely watched it, I was trying to remember what it felt like when I had agreed to let an Obama staffer live in my extra bedroom when the 2008 campaign was tight in Pennsylvania. This book did not restore that optimism.
The book is simultaneously interesting and boring because President Obama really dives into what his goals were, what challenges he met, and how he and the team dealt with them. The book would be more inspirational, and more of a page turner, if he left out those details. It would also feel like a commercial for his presidency instead of a memoir. If you’re in the mood to hear the details of the political process and how Obama and team tried to manipulate it, you’ll enjoy this book. If you’re looking for a compelling narrative that’s fun to read, there’s an abridged version.
One last recommendation: definitely get the audiobook. He reads it himself, and I think being able to hear his emotions and emphasis while he reads is worth the slower pace.
I found a lot of great nuggets in this book, but I’m not sure I’d recommend it to everyone. If you’re trying to sell your boss on a move to AWS, definitely give her this book. If you’re trying to figure out what your IT priorities are, I’d recommend Leading the Transformation or The Phoenix Project. If you insist on reading something that’s written by an AWS employee, I’d even try War, Peace, and IT by Mark Schwartz.
While I found some nuggets that I enjoyed and found informative, most chapters just identify a problem and then explain how being in the cloud mitigates it. That’s not really surprising considering it’s 53 chapters in only 298 pages (at least as Kindle counts them). One big exception: Chapters 5 and 53 outline processes and are a little more detailed (though each of them could be an entire book). The other little nuggets that I liked were:
Chapter 23’s discussion of the future of managed services in the cloud was interesting. The point about combining cloud migration with application management (essentially turning applications into SaaS) was a great one, as was the highlighting of DevOps as an important feature of an MSP.
I liked Chapter 33’s discussion of Hybrid Cloud. Too often AWS and their surrogates dismiss Hybrid Cloud as a fantasy. Unless you have the kind of capacity that makes it worth running an entire datacenter just for you (like Netflix), you’re probably better off having everything in the cloud. The problem is, you can’t just put it there by magic. Orban does a nice job recognizing Hybrid Cloud as a stage, maybe a VERY VERY long one, but not a destination.
I liked the discussion in Chapter 46 about a company’s vision for moving from Centralized IT to Exponentially Growing IT.
2020 was a year where I expected to just entrench and work on establishing myself at IBM. It was a really good year for that, and (thanks to a lack of travel and a bit of a COVID slow period) also allowed me to set some pretty ambitious fitness goals. That leaves me feeling pretty good about what I was able to accomplish, making it possible for 2021 to be more about refining and doubling those successes.
Specifically, I have set three goals:
I want to really focus on working with CIOs/CTOs to make their “Infrastructure and Operations” teams into a value-add part of IT. As I spend more time with more companies, I consistently see them able to articulate how a better customer experience or using AI can bring value to their bottom line, but unable to understand why they can’t securely and efficiently create the platforms those innovations need to run on top of. I intend to spend 2021 moving beyond the containerization, cloud, and automation discussions I’m leading customers through now and pulling that together into a more strategic view of the organization. You’ll see this in more frequent observations on Twitter, some blog posts, and (if you’re lucky enough to be a client) in a new model we’ve been working on for how to change your culture and tooling.
I am planning to be much better about keeping up with my professional contacts. I realized yesterday that I haven’t done an inventory of my contacts since very early 2020. My plan will be to do better at connecting with people I’ve worked with on LinkedIn and reaching out to folks I worked with a few years ago to see how they’re doing.
I managed to finish 2020 in reasonable shape. I have not traditionally been good at staying in good shape once I reach it. I tend to yo-yo back and forth between in shape and out of it. My goal this year is to stay around my current level of fitness for the whole year.
If you’ve read much of Gene Kim’s work you’ve heard of the transformation that HP Printers made to their firmware development process. Kim cites it so frequently because it is an example of one of the oldest software development groups around, making some of the hardest software to test (printer emulators, anyone?), transforming from a waterfall development process to an agile one. Gruver and Mouser are the executives behind that transformation, and this book is full of great lessons for everyone who’s had an IT department longer than Amazon’s been a bookstore.
Let me give you three of my favorite things about this book, and I encourage you to read it and come up with your own:
In the very first chapter they talk about how many companies make the mistake of trying to start an enterprise agile transformation by just having one team transform. It doesn’t work because, at least in most companies, a meaningful development team can’t go to production on its own. You have that one transformed team running scrums and using whiteboards full of sticky notes to deliver code to an integration test environment where it will sit for 6 months. The key is to find ways to iteratively improve the whole organization (or at least ALL the parts that have to develop together) towards agile.
Having worked in technology all my life, I had never really considered why software development should be managed so much differently than your other cost centers. I really liked their point that what makes software development so hard to plan is that it’s hard to see, and it often goes from looking 80% done to looking destined for the trash heap overnight. That ever-changing aspect is also what’s great about software: it’s hard to predict where you’ll be in 6 months on a software dev project, but it’s much, much easier to change your mind today about where you want to be tomorrow. Agile software development makes so much sense because it uses the inherent strengths of software rather than trying to mold it into waterfall techniques.
Finally, this book isn’t afraid to make the tough point that executive teams must understand technical concepts like continuous delivery to be successful. They can’t just be great people managers or inspirational leaders. They have to know when to make exceptions to the “no branches” rule and when to require automated tests. These are things that reasonable development managers can disagree on, but a knowledgeable executive must set a standard for.
I confess, value stream mapping is a technique I already use with clients, and I was reading this book mostly to make sure that I could say that I did (and now it’s on the Internet!). I will say that after being taught value stream mapping primarily as it relates to the DevOps movement and CI/CD pipelines, I was surprised to see how general the approach is. It seems like it would be really valuable for enterprises to apply in areas outside of manufacturing (it’s based on Lean and started as one of the Toyota techniques) and IT. Maybe someday I’ll focus more on my MBA and less on my Comp Sci background and actually use it that way.
For this book “review”, I thought I’d spend some time talking about the nuggets throughout the book that work a bit differently when our team creates a value stream map to find opportunities for IT automation. Feel free to leverage this in your business, or give us a call and we can help you get started!
First of all, we usually do a couple of value stream maps in our spare time in the first week of an Ansible Tower Foundation engagement, while also installing Ansible and connecting it to our client’s Git, Active Directory, credential management, etc. Consequently, ours are usually a lot less involved. This difference results in a few big changes:
The scope has to be different. The book talks about focusing customer value streams “ring to ring” (from the ring of the phone call placing the order to the ring of the cash register). We still like to focus on this, but we often take an IT Operations customer’s “ring” (a ServiceNow request for a new VM) to service fulfillment (a notification to the user that they can log in).
The session itself is shorter. I like to keep it to one or two hours depending on the complexity. This allows for nice crisp timekeeping: 10 minutes to intro the concept, 20 minutes for current state, 10 minutes to brainstorm opportunities for improvement, 20 minutes for future state.
I think the charter becomes MORE important when you want to do a short session. It is important to have your sponsor identify the limitations of the exercise and the precise scope. This will limit wasteful conversation during the workshop.
Since our focus is typically on finding out how best to improve an IT Operations value stream via automation, we are looking for slightly different things in the value stream map:
We do not limit ourselves to looking only for automation opportunities. We look for all manner of improvements: avoiding batching, identifying opportunities to run steps in parallel, etc. It’s not our goal to fix those as part of the project (though IBM has some excellent process consultants); it’s merely our goal to avoid investing in automating a process that shouldn’t exist at all or is not a bottleneck.
The Gemba walk is usually not a physical one. I like to have actual operators walk an actual ticket on screen where possible. In longer running processes it may be constructive to view several tickets in different phases.
We use all of the resulting automation “kaizens” to start our clients’ automation backlog. Because of the value stream mapping we can be clear about just how much value can be expected out of each automation.
Overall, I definitely recommend reading the book, even if you’re only planning to use the technique for CI/CD or IT Operations value streams. It will give you a lot of perspective on the history of the technique and its uses in the wider enterprise, in addition to arming you with the vocabulary you need to leverage it for your use case.