Book Review: War, Peace, & IT by Mark Schwartz

The book bills itself as being for the CEO who’s looking to make sense of how the changes in IT should change the way they view the CIO and IT. It mostly lives up to that, though you’ll need some working knowledge of agile/cloud. As a technologist, I also found it valuable to hear the language and examples the author uses, because I think they’ll be valuable as I talk to development managers and infrastructure executives who are trying to figure out how to sell the agile/devops/product/team approach to a business that is used to thinking in terms of “keep the lights on,” multi-year initiatives, and chargeback. Overall, I’d recommend the book as a quick read, mostly to remind you of things you already know and feel, and to give you words that’ll help you say them.

A few things I liked in particular:

  • One of the primary points that he makes is that the world is changing so quickly right now that the biggest risk facing most companies is that they won’t be able to change fast enough. With that in mind he argues that DevOps and Cloud are actually investments in Risk Management. I find this powerful for two reasons:
    1. In many companies it’s the people carrying the title “Risk Manager” who want to stop things like deployments during business hours and hosting data outside the data center. This points out that (often, not always) those risk managers are protecting the company from the wrong/lesser risk of small outages at the expense of the big risk (not being able to innovate as fast as your competitors).
    2. It helps justify the cost of some of the DevOps and Cloud transformations that need to happen. These are often hard to justify when they’re compared with the opportunity to deliver on business requirements or the opportunity to use automation to save money on infrastructure. Framing DevOps as a risk management play helps explain why there’s a need to invest in making the company better able to change.
  • He actively fights the urge to treat “IT as a Business,” arguing that it needs to be part of the company, like finance or marketing. He rightly points out that in most companies IT is expected to operate like a sort of sub-contractor without a sub-contractor’s freedoms: it isn’t given the ability to hire freely and set salaries optimally, it can’t choose to avoid business customers that become untenable, and it can’t charge more for higher-demand items instead of passing on costs (at least in most chargeback models I’ve seen). Additionally, making IT a sub-contractor means adding red tape to the change process: IT is incentivized to offer standard products, to require that the business “pay” for changing requirements, etc…
  • He uses one of my favorite analogies for explaining agile software development vs. project-based development to business people: it is like buying options. Business people understand the risk of buying assets vs. buying options… most even remember that moment of surprise in business school when you realized just how different the price is between an option and an asset. Agile software development is like that. You can build an MVP for the smallest slice of the market you can think of, then wait and see if users like it before agreeing to pay for the rest of the project.

Clever Idea: Serverless is Commitmentless

I’ve decided to start a bit of a series of blog posts called “Clever Idea”. I’ll use the space to talk about something I’m working on either in my hobby project (an app for picking Football games with your friends) or at work that I think is clever. The intent will partly be to share a technique or technology that I think some people reading might be interested in. I’ll also be hoping that occasionally someone points out to me that I’m not as clever as I thought and there’s actually a better way to accomplish what I’m trying to accomplish.

This season will be the first season that I’m planning to open up my little app to more than just my immediate friends. I’m planning to let my friends invite their friends to form leagues and hopefully end up with a couple dozen leagues. Being somebody who has floated from infrastructure to software development and back again throughout my career, I know that this doesn’t just mean developing new screens for adding leagues and registering new players… it means planning out my infrastructure so that the application can scale appropriately.

Like any good architect, I started with the usual questions… How many concurrent users do I think I will have? How many rows in the databases? etc… The problem is that, as with most software projects, I don’t really know. It could be that no one will be interested in the app and I’ll only have a couple of users. It could also be that sometime during the season it spreads like wildfire and I suddenly have a hundred leagues.

I have a micro-service based architecture. Last season it consisted primarily of containerized Spring Boot apps running on an EKS (Kubernetes) cluster and communicating with a relational database deployed on AWS RDS. This architecture is certainly scalable relative to the vast majority of monolithic applications that exist in enterprise IT today. I had an auto-scaling group set up to support the EKS cluster, and it would scale down to two nodes and up as far as I needed. Without re-architecting the database, it probably could have scaled to several hundred leagues. It’s pretty flexible, allowing my AWS bill to run from ~$200/mo (a small RDS server, a couple of small K8s application nodes, and the EKS control plane) to a cluster/db large enough to support a few hundred leagues, with the only down-time being when I switched from smaller DB instances to larger ones.

It’s not nearly as flexible as Lambda / DynamoDB though. When I rebuilt the application this year it was with that flexibility specifically in mind. The app now runs entirely on Lambda services and stores data (including cached calculations) in DynamoDB. Both of these are considered serverless which means AWS ensures that my Lambda services always have a place to run and my DynamoDB data is always available, actually providing more reliability/availability than the K8s/Cluster architecture I had built. More importantly for this post, Lambda and DynamoDB are both “pay by the drink”. With Amazon, those are very small drinks:

  • The base unit for Lambda is 50ms. A typical call from the front-end of my app to the backend will result in a log line that reads: “REPORT RequestId: 6419f5ca-f747-4b77-a311-09392fc6bcc3 Duration: 148.03 ms Billed Duration: 150 ms Memory Size: 256 MB Max Memory Used: 151 MB”. AWS charges (past the free tier… which is a nice feature, but not the focus here) $0.0000002 per request and $0.0000166667 per GB/s. For 10,000 calls similar to the one above, I’d be charged $0.002 for the number of calls and $0.0064 for the memory consumed. We do need to remember that in a micro-service architecture there are a lot of calls; some actions will cause 5 or 6 Lambda services to run. However, based on the numbers above, if I end up with only a handful of users, my Lambda charges will be negligible.
  • For DynamoDB, the lower end of the scale is similarly impressive: it charges $1.25 for a million writes, $0.25 for a million reads, and $0.02 per 100,000 DynamoDB Stream reads (more on these in another post). I know from last season that if I only have a couple of leagues then, after refactoring for a big data architecture, I will end up with ~5,000 bets to keep track of that are rarely ever read (there are cached totals) but often get written and rewritten (let’s say 5 times per bet), ~300 games that are read every time a user loads their bet page (let’s say players read all 300 games 12,500 times), and ~25 player/league records that are used on nearly every call from the UI (let’s say users are queried 50,000 times). Using those conservative guesses for usage, a small season would cost me $0.036 for the writes and the resulting DynamoDB Stream reads and $0.95 to satisfy all the reads of the games and leagues. That means my relatively small league costs less than $1.00 for the whole season.
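To sanity-check those numbers, here’s a quick back-of-the-envelope cost model. The prices are the published rates quoted above; the usage figures are just the guesses from this post, not measurements:

```python
# Back-of-the-envelope serverless cost model using the rates quoted above.

def lambda_cost(requests, billed_ms, memory_mb):
    """$0.0000002 per request plus $0.0000166667 per GB-second."""
    request_cost = requests * 0.0000002
    gb_seconds = requests * (billed_ms / 1000) * (memory_mb / 1024)
    return request_cost + gb_seconds * 0.0000166667

def dynamo_cost(writes, reads, stream_reads):
    """$1.25 per million writes, $0.25 per million reads,
    $0.02 per 100,000 DynamoDB Stream reads."""
    return (writes / 1e6) * 1.25 + (reads / 1e6) * 0.25 + (stream_reads / 1e5) * 0.02

# 10,000 front-end calls like the log line above: 150 ms billed at 256 MB
print(round(lambda_cost(10_000, 150, 256), 4))  # ~0.0083

# A small season: 5,000 bets written 5 times each (plus stream reads),
# 300 games read 12,500 times, and 50,000 player/league reads
season = dynamo_cost(writes=5_000 * 5,
                     reads=12_500 * 300 + 50_000,
                     stream_reads=5_000 * 5)
print(round(season, 2))  # just under $1.00
```

Useful mostly as a way to play with the levers: doubling the memory size doubles the GB-second term, while the per-request charge stays negligible.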

The reason I titled this post “Serverless is Commitmentless” is that I can build an app and host it on redundant compute and storage without really paying anything at all. If I get lucky though, and the application is a huge success, this architecture would scale to thousands of leagues before I need to rethink my DynamoDB indexes. As long as my revenue goes up faster than the AWS hosting fees when that happens, I have zero upside or downside risk from infrastructure cost when starting a new project.

Book Review: Algorithms To Live By

This book has a clever punch line… “What can we as humans learn from computers about how to approach problems that face us in everyday life? Get the answer from a psychologist and a computer scientist.” A clever side effect is explaining some of the more interesting concepts in computer science (search, caching, communicating over an imperfect network, etc…) through the lens of everyday life. The result is a book that not only offers clever anecdotes about everyday life, it also educates us on concepts that affect computers, networks, and computer science.

Some of the interesting points that we can learn from computer science and apply to every day life:

  1. I think their overall point is that there are a great many problems that, even with a supercomputer, cannot be solved unequivocally. The key is to pick an algorithm that, within an appropriate amount of time, has the best chance of giving you the best answer. Once you have confidence you’ve done that, live without regret. The authors quote Bertrand Russell on this: “it would seem we must take account of probability in judging of objective rightness… The objectively right act is the one which will probably be most fortunate. I shall define this as the wisest act.” We can hope to be fortunate, but we should strive to be wise. There are a bunch of examples that fall into this, but my favorite is:
    • Suppose you have 2 months to find an apartment, you don’t feel you can really compare two apartments without seeing both of them, and you run a very high risk of losing an apartment if you don’t make an offer the minute you see it. Then it is optimal to spend 37% of your time looking but unwilling to commit, and 63% of your time ready to pounce when you find the right place. There’s still only a 37% chance you actually end up in the best place you could have… but you will have done the “wise” thing.
  2. There’s a great little lesson on exponential backoff that I loved. Essentially, exponential backoff is a practice of making repeated failures drastically reduce the frequency with which you make an attempt. It’s common in computer science because computers can be so persistent if told to just retry infinitely. My desktop application, for example, may try to communicate with the server and fail. It then retries immediately (with a computer that a millisecond later). If both fail it will wait a whole second (perhaps even doing some other task in the mean time) before trying again. Then wait longer and longer between attempts because each time it fails the next time is less likely. I had never really thought about it, but while exponential backoff is common in computer science, people much more commonly give something a certain number of tries in rapid succession and then give up forever. The authors give this example:
    • “A friend of ours recently mused about a childhood companion who had a disconcerting habit of flaking on social plans. What to do? Deciding once and for all that she’d finally had enough and giving up entirely on the relationship seemed arbitrary and severe, but continuing to persist in perpetual rescheduling seemed naive, liable to lead to an endless amount of disappointment and wasted time. Solution: Exponential Backoff on the invitation rate. Try to reschedule in a week, then two, then four, then eight. The rate of retry goes toward zero, yet you never have to give up entirely.” [Maybe they were just going through something personal.]
  3. They talk about how computationally intensive it is to do a full sort. If you’re a computer nerd… you’ll love that they actually use big O notation to do so!!! While sorting makes searching easier, you have to do a LOT of searching to justify a full sort of a large set. Consequently they recommend a life filled with bucket sorts, where you just put things that are categorically alike together and then search through the categorical subset when you need a specific item. As a slob that treats his inbox as a steady stream… this seems to justify my actions!
  4. There’s discussion of caching and different algorithms that computers use to decide what information to keep “close at hand” and what to file away for slow retrieval if/when needed; then brilliantly analogizes that to organizing your closet, filing cabinet, and house. That also justifies a certain amount of apparent disorganization!
  5. They end up basically justifying agile product/project management by pointing out the level of complexity in predicting things that are years away and have increasingly high numbers of factors influencing them.
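The apartment-hunting rule in the first example is the classic optimal stopping (“secretary”) problem, and the 37% figure is easy to verify with a quick simulation. This sketch is mine, not the authors’:

```python
import random

def apartment_search(n, look_fraction=0.37):
    """Look at the first 37% of options without committing, then take the
    first option that beats everything seen so far (rank 0 is the best)."""
    ranks = random.sample(range(n), n)   # random viewing order
    cutoff = int(n * look_fraction)
    best_seen = min(ranks[:cutoff])      # best of the "just looking" phase
    for rank in ranks[cutoff:]:
        if rank < best_seen:
            return rank                  # pounce on this one
    return ranks[-1]                     # out of time; take the last one

trials = 100_000
wins = sum(apartment_search(100) == 0 for _ in range(trials))
print(f"Ended up with the best apartment {wins / trials:.0%} of the time")  # ~37%
```

Running it shows both halves of the claim: the 37/63 split is optimal, and even playing optimally you only land the best apartment about 37% of the time.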

As I said though, it’s not all about learning life lessons. If you don’t know much about computers you can expect to learn how the internet communicates messages over unreliable wires (or even thin air), how computers calculate the route between two locations, and the magic of Big O notation.
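As an aside, the exponential backoff schedule from point 2 is only a few lines of code. Here’s a minimal sketch; the one-second base, doubling factor, and cap are my own arbitrary choices:

```python
def backoff_delays(base=1.0, factor=2, cap=3600, attempts=10):
    """Yield the wait (seconds for a server, or weeks for a flaky friend)
    before each successive retry; the delay doubles after every failure."""
    delay = base
    for _ in range(attempts):
        yield delay
        delay = min(delay * factor, cap)  # never wait longer than the cap

print(list(backoff_delays(attempts=5)))  # [1.0, 2.0, 4.0, 8.0, 16.0]
```

The same generator describes the friend example exactly: a week, then two, then four, then eight, approaching zero invitations per year without ever hitting it.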

Clever Idea – Invitations with Angular and OAuth

I’ve decided to start a bit of a series of blog posts called “Clever Idea”. I’ll use the space to talk about something I’m working on either in my hobby project (an app for picking Football games with your friends) or at work that I think is clever. The intent will partly be to share a technique or technology that I think some people reading might be interested in. I’ll also be hoping that occasionally someone points out to me that I’m not as clever as I thought and there’s actually a better way to accomplish what I’m trying to accomplish.

The Problem

As some of you know (and I’ll be posting about more), I have run a little app for picking football games for the last 4 or 5 years. It allows a “league” of people to keep score of who does the best picking games against the spread. This year I am making an improvement that allows existing users to form new leagues. A user is asked to specify the email addresses of everyone they’d like to participate in the league. If you specify someone who is already in the system, they are added to your list of players. The first problem comes in when you want to invite someone who isn’t currently enrolled in the system.

I use Google OAuth for authentication, so I will need to “pair” the identity that an existing user has specified as wanting to be in their league with a logged-in Google ID. Note that I feel this is necessary because not everyone uses the email address associated with their Google ID as their primary email address, so I would not want to make it impossible to join unless the person inviting you happened to guess the right email. One of my friends gave me the great suggestion of providing users a link: if I send an email to the new player saying they’ve been invited to play in a league, I can include a link with a query string parameter that contains their “user id” (a random GUID created by my system). This is where things get more difficult. Unfortunately (in this case), I’m using Angular for my frontend. Angular creates a downloadable browser app that players in the league use to interact with the system and that keeps itself refreshed through backend API calls. I’m a huge fan of Angular, BUT it makes my “pairing link” impossible.

Angular is a fully downloaded app that doesn’t actually go fetch other pages when the URL changes (those are handled client-side through the Angular “router”). Consequently, I can’t send someone directly to the “pairing” section of my app, because the URL doesn’t “exist” on the web. In their deployment instructions, the Angular team recommends forcing all incoming traffic to a specific URL via your app server. Seemingly this leaves me with no choice but to use an unfriendly system where I force users to land at my strange website for the first time, trust it enough to “login” even though they aren’t yet users, and paste their “pairing ID” (the GUID that represents them in my system) into the system so it recognizes who they are.

The Clever Idea

Once you hear the idea, it seems obvious. However, I had fully coded the crappy user experience mentioned above before I came up with it… Just build a simple webpage outside of your Angular deployment artifacts, but located alongside your app. This means that Google will recognize the host as whitelisted when someone tries to login, your users will trust it as your site, AND you can send people to a specific page with query string parameters. Below is the simple HTML/JavaScript page that I set up for the purpose (you can find it here on git):

<html lang="en">
<head>
  <meta name="google-signin-scope" content="profile email">
  <meta name="google-signin-client_id" content="<id>">
  <script src="" async defer></script>
</head>
<body>
  <h1>Welcome to the New Player Page for LTHOI</h1>
  To pair, please follow these steps:
  <ol>
    <li>If you don't want to play Leave the House Out of It, do NOT fill in this form.  Your invitation will expire and the person who invited you will be alerted.  You will be removed from our database entirely.</li>
    <li>Sign in to your Google Account.
      <div class="g-signin2" data-onsuccess="onSignIn" data-theme="dark"></div></li>
    <li>Fill in your first name and last initial.<br>
      <input type="text" id="fn" value="Your First Name"><br>
      <input type="text" id="li" value="Your last initial"><br>
      <button onclick="onSubmit()">Accept Invitation to Play Leave The House Out of It</button></li>
  </ol>
  <p id="error"></p>
  <script>
    var access_token = "Unregistered";

    function onSignIn(googleUser) {
      // The token to pass to the backend with the pairing request:
      access_token = googleUser.getAuthResponse().access_token;
    }

    function onSubmit() {
      // Get the user_id from the query string parameter to pass along
      var urlParams = new URLSearchParams(;
      var uid = urlParams.get('uid');
      var request = new XMLHttpRequest();'POST', '<BackEnd>/bff/pair?uid=' + uid, true);
      request.setRequestHeader('googleToken', access_token);
      request.onload = function () {
        if (request.status != 200) {
          document.getElementById("error").innerText = 'That did not work.  Are you sure you were invited?  Logged in?';
        } else {
          window.location.replace("<dev location of LTHOI so they can login>");
        }
      };
      request.onerror = function () {
        document.getElementById("error").innerText = 'That did not work.  Are you sure you were invited?  Logged in?';
      };
      request.send('{ "first_name" : "' + document.getElementById("fn").value + '", "last_initial" : "' + document.getElementById("li").value + '" }');
    }
  </script>
</body>
</html>

Book Review: The Age of Surveillance Capitalism

I had extremely high hopes for this book after listening to an interview with the author. Unfortunately, most of what I liked about the book I heard in that 20 minute interview. I believe that the core theme of this book is a topic that will shape the next 30 years of American life. Unfortunately the vast majority of the book is spent detailing (and at times exaggerating) how evil the current system is instead of charting a course for correction without simply stopping progress. With that in mind, I’m going to offer in this review a few things I like and then discuss a few places where she’s gone too far or failed to cover a topic.

I love three aspects of the way the problem is framed:

  1. I think it is brilliant that Zuboff identifies Google’s (and then the rest of the Internet’s) switch from a “pay per impression” to a “pay per click” model as the first step onto our present slippery slope. Pay per click means that Google is taking responsibility not only for displaying someone’s ad… but for getting the user to click on it, which incentivizes them to actively match customers and products.
  2. I love the comparison between commoditizing and selling the ability to predict what we will click on and commoditizing and selling labor, which began in the industrial revolution. Suddenly, instead of needing 25 cobblers to make enough shoes for a city, you needed 25 laborers to operate the shoe factory. This gave too much power to the capitalists who owned the factory and ended up resulting in unions (and eventually safety and wage related regulations).
  3. Zuboff points out on page 192 that “Demanding Privacy from surveillance capitalists or lobbying for an end to commercial surveillance on the internet is like asking Henry Ford to make each Model T by hand or asking a giraffe to shorten its neck. Such demands are existential threats.” I agree, progress/evolution cannot be simply reversed.

My problem with the book is how evil she claims the surveillance capitalists already are. Though I credit Zuboff with the comparison to the labor market (#2 above) she spends far more time comparing it to the way totalitarian leaders take advantage of their people or the way Europeans took advantage of Native Americans. Zuboff uses hundreds of pages trying to analogize surveillance capitalism to various evil periods in history, leaving very little of the book for the laws, regulations, social protests, etc… that might be necessary to cause a course correction. Perhaps she’s already planning a sequel?

Overall, I think the book is a worthwhile read because of how brilliantly it identifies one of the biggest problems of our time, even if you’ll have to skim the sections that over-elaborate the problem and accept the disappointing lack of resolutions.

Book Review: Who Can You Trust? by Rachel Botsman

I was inspired to read this book by an interview with Botsman on a podcast I listen to. It’s based on a compelling overall narrative… how have we reached a point where no one trusts the President of the United States, but people agree to stay in spare rooms of complete strangers on AirBnB and will literally put their life in the hands of a self-driving car? Botsman came to fame writing about the sharing economy, so it makes sense to hear her perspective on the topic.

The book isn’t as bold as I would have liked in predicting the future. In books like these, I prefer to hear an experts opinion on what is likely to happen, whether it is good or not, and what that means we could/should be aware of. Botsman prefers to focus only on the third of these topics; the dangerous trends we are working towards. That said, she does offer a couple compelling cliffs that we appear to be confidently striding towards:

  1. Not Evaluating Bots’ Motives – Botsman weaves together stories of her daughter interacting with Alexa and scientific studies to show that as people become more comfortable with technology, we are too trusting of its “answers”. Her concern is that the bots we “trust” are not there as a public service, but to influence us into certain behaviors.
  2. Trusting Technology Before It’s Ready – Botsman also points out that there are a LOT of things that artificial intelligence and algorithms have not yet figured out how to do well, but we seem willing to trust them without testing them. There are several good academic studies in which this behavior is exhibited.
  3. Algorithmic Trust (Blockchain) Can Be Manipulated – Botsman covers the Ethereum DAO hack very well; if you don’t know much about blockchain, you’ll still be able to follow it. She points out two concerns with the hack. First, the hacker didn’t really “break” the system, just found a way to use it that was unintended… imagine if Ethereum had become an international currency before that happened. Second, a relatively small group of people decided to effectively “undo” the change. In this case they were doing “good”, but it still served to demonstrate that blockchains can be manipulated.
  4. Reputation-Based Trust Getting Out of Hand – The topic of the Social Credit Score in China is covered in more detail than I’ve ever read before, and it is terrifying! It’s essentially a credit score system that doesn’t just evaluate your wealth and default history, but your friends, your political leanings, the places you’ve gone, your grades in school, your hobbies, and more. Essentially this will give the government the ability to incentivize all manner of behavior. The consequences will be real too: “people with low ratings will have slower internet connectivity; restricted access to more desirable night clubs, restaurants, and golf courses.” A government document states the goal is to “allow the trustworthy to roam everywhere under heaven while making it hard for the discredited to take a single step.” While it is easy to say that’s just China… we already evaluate people based on the number of followers they have, little blue check marks, star ratings on AirBnB or Uber, etc…

Overall, I recommend the book, but only if you are interested in these topics and would like a little handbook of their current state. I also don’t think this book will age well, as these topics are evolving quickly. If you happen to come across this blog post after 2019… I doubt I’d still recommend it.

Fantasy Baseball WAR

Whether you care about baseball or not, and whether you wanted to hear about it or not: if you’ve had more than three conversations with me in real life, I have told you that one of my favorite things is the fact that in baseball a player’s “Wins Above Replacement” (WAR) can be calculated. Statisticians can say, with a fairly high level of precision, how valuable (in terms of wins) a particular player is to a baseball team over a season. Imagine being able to calculate your colleague’s value to the organization with math!

One of the problems with WAR, though, is that it does not translate cleanly to fantasy baseball. There’s definitely a correlation between WAR and fantasy value, but it is far from perfect. So this season, in preparation for my fantasy baseball draft, I set out to calculate how valuable all the potential players in fantasy baseball are. Settle in; this post is going to be long, and worth it. I’ll start with a really quick overview of WAR in real baseball, then discuss the major pieces of it that I had to change for fantasy baseball, and finally I’ll talk about the somewhat surprising results.

WAR in “Real” Baseball

At a high level, to calculate a player’s WAR we figure out the number of runs a player can be expected to add to a team with his bat/base running and the runs that he can prevent with his glove or pitching arm. We then divide that number by an approximation of the number of runs that it takes to win a game. Finally, we subtract the number of wins that a “replacement” player could have achieved. This is done by looking at the runs contributed/saved by the best player who would typically be available on waivers at that position.
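That recipe can be written down almost verbatim. The sketch below uses made-up numbers purely for illustration; real WAR implementations use carefully derived constants (roughly 10 runs per win is the figure usually cited):

```python
def war(runs_contributed, runs_saved, replacement_runs, runs_per_win=10):
    """Wins Above Replacement: runs added with the bat/base running, plus
    runs prevented with the glove or arm, converted to wins, minus what a
    freely available replacement would have produced."""
    return (runs_contributed + runs_saved - replacement_runs) / runs_per_win

# Illustrative only: 60 runs with the bat, 5 saved with the glove,
# against a waiver-wire replacement worth 20 runs
print(war(60, 5, 20))  # 4.5
```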

Projection Systems

In order to calculate a player’s value for this upcoming season, we need a reasonable guess of how they’ll do statistically in 2019. There are a lot of projection services, but I will be using Steamer because I can get it for free on Fangraphs. Steamer doesn’t post their exact algorithm, but I suspect that, like most systems, they project a player’s statistics by assuming they will progress in a similar manner to historical players who had similar statistics at their age. So, for example, if you want to figure out how Andrew McCutchen will do this year, you look for players that have had similar careers to this point (if you’re curious… Andre Dawson, Dave Winfield, and Vernon Wells are the most similar) and then assume McCutchen’s age-32 season will be similar to what theirs were. Interestingly, Steamer was developed by a high school math teacher and two of his students in 2008 and has consistently been as accurate or more accurate than systems developed by high-profile sports shops that charge for the data (ESPN, Baseball Prospectus).

Calculating “Wins” in Fantasy Baseball

If we’re going to calculate Fantasy WAR, the first thing we have to do is determine what a “win” is. Clearly, in fantasy baseball, runs scored and runs prevented don’t win games. In my fantasy baseball format, you play head to head each week against another player, but there isn’t just one winner. You effectively play 6 offensive games each week against one opponent, one for each statistical category (TB, HR, R, RBI, SB, OBP). For example, if my team has more home runs, total bases, and RBI but my opponent’s team has more stolen bases, runs, and a higher OBP, then we each win three “games”. It’s those wins that matter at the end of the season for playoff seeding. For this reason, the “win” I’m counting is winning a single statistical category.

The next question is how to take a player’s projected home runs for the season and turn them into fantasy wins in the home run category. What I need for that is to be able to compare the player’s numbers to the average number of home runs a fantasy team will have per game in 2019. In my league there are a total of 120 players playing in each game (there are 12 teams and each team has 10 starting position players). It’s reasonable to assume that those 120 players will be the top 120 players according to fantasy drafts that have already been held in 2019 (this data is available on Fangraphs). I averaged the projected home runs that will be hit by those 120 players over the season. Since there are ~25 weeks in the season, I divided that number by 25 to get the average number of home runs per player per game in my league. I then assumed that a “win” in home runs can be achieved by beating one team (10 players) of those average players. For my league, that means 22.364 home runs is a “win” in home runs.
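In code, that threshold calculation looks something like this. The projection numbers here are fabricated for illustration; my real inputs come from the Steamer projections on Fangraphs:

```python
def category_win_threshold(projections, teams=12, starters=10, weeks=25):
    """How many HRs (or steals, etc.) it takes to beat one average team
    for one week, given full-season projections for the player pool."""
    pool = sorted(projections, reverse=True)[: teams * starters]  # top 120
    per_player_per_week = sum(pool) / len(pool) / weeks
    return per_player_per_week * starters  # one average team's weekly total

# Fabricated pool: 150 players projected evenly between 5 and 45 HRs
fake_projections = [5 + i * 40 / 149 for i in range(150)]
print(round(category_win_threshold(fake_projections), 2))
```

The same function works for any counting category; only the projections change.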

Calculating “Replacement” in Fantasy Baseball

In addition to knowing how many wins a player is worth, I also have to figure out how many wins a replacement player is worth. My league has 6 bench spots and 2 DL spots; I have assumed that teams will, on average, use half of those for position players. As I mentioned above, there are 12 teams in my league and 10 position player starters. That means that at any moment each team has spoken for 14 position players and the league has spoken for 168 total. Using that information, I have assumed that an available “replacement” player for fantasy purposes is the 169th best position player in any particular statistical category. So, for example, in HRs the 169th best available player is Ronald Guzman of the Rangers, who is projected to hit 13 HRs (or .520 HRs per fantasy baseball game).

Example of How it Comes Together to Get Fantasy WAR

Let’s take a look at Giancarlo Stanton’s WAR for just HRs. He is projected by Steamer to lead baseball with 45 HRs. As you saw above in the section on calculating replacement value, our replacement home run hitter is Ronald Guzman, who will hit 13 HRs. Giancarlo can then be projected to hit 32 HRs above replacement. In the section on calculating wins, we learned that 22.364 HRs should be enough to win the average game in my league. That means Giancarlo’s 32 HRs above replacement are worth an extra 1.43 wins for whatever team has him.
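Putting the two previous sections together for a single category, the arithmetic is just:

```python
def category_war(projected, replacement, per_win):
    """Season-long category wins added above a waiver-wire replacement."""
    return (projected - replacement) / per_win

# Stanton: 45 projected HRs, the replacement (Guzman) hits 13,
# and 22.364 HRs wins the average weekly HR matchup in my league
print(round(category_war(45, 13, 22.364), 2))  # 1.43
```

A player’s total Fantasy WAR is then the sum of `category_war` across all six categories.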

Comparing The Results to Average Draft Position

The top 20 players in my Fantasy Baseball League

If you’re the curious type and have made it this far, you can find my calculations here. I have pasted the top 20 players by Fantasy WAR above. The column “P ADP” indicates what order they are selected in during Fantasy Drafts that have already happened this year. There are a few things that jump off the page:

  1. Stolen bases are important. You’ll notice that no one who’s projected to get fewer than 10 stolen bases even made the top 20, and that virtually all of the speedsters are towards the top. This is because of how top-heavy the stolen base category is. It only takes 7 stolen bases to win a typical week, so Trea Turner’s 41 steals project to be worth almost 6 wins over the course of a season. Remember, Giancarlo Stanton’s 45 HRs were only worth 1.43 wins because it would take 22 HRs to win a game. Be careful with this information though (see deficiency #3 below).
  2. Steamer likes a few of these guys better than the people drafting in leagues seem to. Rougned Odor and Yasiel Puig in particular show up as much more valuable than you’d expect (most of the rest of the big differences are explained by stolen bases).
  3. Christian Yelich looks to be the most over-valued player in the top 20, with people selecting him 5th even though his value looks to be 13th. The most over-valued player overall is J.D. Martinez, who ranks 33rd in Fantasy WAR but is being drafted 4th among position players.
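The stolen-base effect in point 1 falls straight out of the same per-category formula. A rough comparison (the 7-steals and 22.364-HR thresholds are from this post; the replacement stolen-base level of 0 is an assumption made here to match the “almost 6 wins” figure):

```python
# Same wins-above-replacement formula applied to two categories.
def category_war(projected, replacement, needed_to_win):
    """Wins a player adds in one category over a replacement player."""
    return (projected - replacement) / needed_to_win

# Stanton's HRs vs. Turner's steals. The replacement SB level of 0 is
# hypothetical; the thresholds (22.364 HRs, 7 SBs per week) are from above.
stanton_hr_war = category_war(45, 13, 22.364)
turner_sb_war = category_war(41, 0, 7)

print(round(stanton_hr_war, 2), round(turner_sb_war, 2))  # 1.43 5.86
```

A much lower bar to win the category means each steal is worth roughly three times as many wins as each home run.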

Deficiencies In Fantasy WAR

Please do not use this list as your draft cheat sheet and then yell at me when your team is terrible. There are several key things that Fantasy WAR and/or my calculation of it are bad predictors of:

  1. Billy Beane is famous for his quote, “My shit doesn’t work in the playoffs”. Neither would Fantasy WAR. A team selected based only on Fantasy WAR would be designed to beat the average team. It would be likely to rack up wins in the regular season, but may not stack up well against a team that has several categories very well covered. I would expect a team built on Fantasy WAR to beat up on bad teams but then lose 7-5 in the playoffs without any of the 7 being particularly close.
  2. I ignored positions for this calculation. There were a number of reasons I chose to do this: the two utility players, the daily switching of lineups, and the fact that center fielders aren’t treated separately. That said, the catcher position, and probably the middle infield spots, obviously need to be given special attention. Unless I missed something, the most valuable catcher is J.T. Realmuto at 113th!
  3. The values listed here are only really useful for your first pick. Once you already have Trea Turner and his 41 steals on your team, Mondesi isn’t nearly as likely to win you games. To address this, I intend to make a version of this spreadsheet for my draft where I can ignore certain categories once I have “enough” of them on the team.

If you look through this and spot something you think I missed, feel free to comment.

New Year’s Resolutions 2019

2019 is starting off in a great place; I’m very happy with the friendships/relationships that I’ve formed in New York over the last couple of years (and the ones I’ve maintained via distance), I have a chance to work on a really interesting project at work, and I’m in reasonably good physical shape. In past years there was often “low-hanging fruit” in my life that desperately needed attention and where I could quickly make very rewarding progress (e.g. having let my weight go). This year I feel the need to carefully select what I want to improve, because it will take a lot of work to get significantly better than where I am.

I ended up with three distinct goals for 2019 that can help take me to an even better place:

  1. Enhance My Just For Fun Project – For the last 4 years I have run a website where my friends can pick NFL football games (sort of like Fantasy Football). I usually use it as a way to get hands-on with the technologies my team at work has been using, even though I don’t have time to keep up with all of their progress on a line-by-line basis. This year my team is working with such cool technologies that I’ve really learned a lot and built a product that’s pretty good. I’d like to spend the off-season making it robust enough that I can open it up to users who aren’t just my friends. This will require some functional enhancements: a new user interface, the ability to create and administer leagues without messing with the database, and a few other odds and ends. Most of the changes, though, will be creating better DevOps/testing/monitoring/logging so that there are fewer disruptions. My friends are pretty forgiving of outages and dumb mistakes… but I wouldn’t expect everyone to be.
  2. Get a Sampling of More Cloud Technologies – My current project has me very focused on on-premises Kubernetes. I think I would benefit from a more well-rounded technical background so I can help influence other decisions. With that in mind, I’m going to work on getting a few AWS certifications and probably a GCP one.
  3. Get Back Into Non-Technical Reading – My career has gotten a lot more technical in the last 2 years than it was before. My focus on DevOps has only required the highest-level knowledge of what’s going on in banking, best practices in development/agile, and management techniques. I’ll be looking to modify my regular reading, get through a couple of books that talk about the industry at a much higher level, and maybe even get certified in SAFe, so that I still feel as at home in the developer’s scrum as I do in the DevOps scrum.

In addition to those 3, I want to make sure I don’t regress in a few areas of my life that are going well, most notably continuing to nurture my relationships with people and staying in reasonably good shape. I have a few ways to measure those, but the goals are not remarkable.

The Blog of Burgher Jon Returns!

I’m sure the nearly no one who almost never read it will rejoice. It’s not a grand re-entry; I’m as ambivalent as ever about how many people read or enjoy it. It’s back online because I have missed having a place for long-form expression. I enjoy thinking through something long enough to have a coherent thought on it, and I have found the best way to be sure those thoughts are coherent is to put them somewhere where someone might read them.

If you’re curious… the old blog posts, all 1050 or so of them, are gone forever. Around this time last year I was cleaning up my AWS account and inadvertently deleted the instance that had my blog on it. Any of you who have worked with me professionally will appreciate the humor in the fact that I, for a brief moment, was forced to recognize the importance of segregation of duties!