Book Review: Prediction Machines

If you’re going to be working in technology for more than 10 years, you should read this book. It’s written by a set of economists associated with the Creative Destruction Lab at the University of Toronto. As you might imagine from an incubator with that name, they have made many investments in Machine Learning companies. The observations in the book are less about how machine learning works (though there is obviously some introduction required) and more about the implications for the wider economy.

As the title suggests, they believe that the current advancements in artificial “intelligence” are not steps toward general intelligence; rather, they represent advancements in computers’ ability to distill lots of data about what humans have done into a prediction. For example, Google’s image recognition doesn’t actually “recognize” images; it is only able to say, “based on other sets of pixels that people have identified as tables, I think that there is a 98% chance that this is a photo of a table.” This key assumption then allows them to predict how further advancements in these prediction machines will affect which kinds of companies, jobs, social programs, etc… go up (or down) in economic value.

As they put it, “Economics offers clear insights regarding the business implications of cheaper prediction. Prediction machines will be used for traditional prediction tasks (inventory and demand forecasting) and new problems (like navigation and translation). The drop in the cost of prediction will impact the value of other things, increasing the value of complements (data, judgement, and action) and diminishing the value of substitutes (human prediction).”

The remainder of the book explores logical deductions from that premise. Some of the more interesting ones are:

  • Some areas of prediction only become real game changers when the machines reach near-100% certainty. For example, imagine if, when Amazon made a recommendation, they could be 98% sure that you’d want to buy the product… would they automatically send the product to a distribution center near your house? Maybe just send it straight to you and let you return the one product in fifty that they’re actually wrong about.
  • The value of data is going up quickly.
  • Machines and humans have distinct strengths… humans tend not to be thorough, while machines can’t independently discover a new type of data that should be incorporated. The two will be paired, with machines doing the prediction and humans doing the judgement.
  • They identify six types of risk associated with prediction machines, the most interesting of which is that for tasks like flying an airplane, humans get less and less skilled as the autopilot does more.

“Book” Review: Beyond the Phoenix Project

This “book” is actually a set of conversations recorded by John Willis and Gene Kim. The main advice I can offer is: absolutely do NOT buy a printed version; this isn’t really a book. It was apparently inspired by “Beyond the Goal”, a similar lecture series by Eliyahu Goldratt, who authored “The Goal”, the novel about Lean that inspired Kim to write a novel to explain DevOps. I highly recommend getting it (in audio form), even if you only listen to the modules on Lean and Safety Culture.

The first two chapters, which cover Goldratt and Deming, are not the most interesting. They aren’t uninteresting, but they go into more detail on the individuals than I needed.

The next two chapters are fantastic and go through a lot of the concepts that Kim and Willis borrowed from Goldratt and Deming, the two people they drew on most heavily when coming up with the DevOps concepts. They then turn to Lean and Safety Culture, covering the history of each discipline and the most important parts that were borrowed. I learned a TON from these sections. I particularly enjoyed the section on the Andon Cord (which is intended to be pulled on a production line if anything goes wrong). I had of course glanced at the concept before, but had always dismissed it as something that was only really useful for the most mature organizations. Kim and Willis exposed me to how Toyota actually deals with it: they panic when it’s not pulled enough, and not every pull stops everything; it’s more of an escalation. The comparisons to a deployment pipeline are obvious. The next chapter is a recording of a conversation between leaders in Lean, Safety Culture, and DevOps; there were a few good nuggets, but I would generally say you can skip it.

The only other module that I found interesting was the case studies, particularly the stories about Target and Nordstrom. It also closes with an interesting observation from a conversation Willis had with the CFO of Goldman Sachs. Apparently he is tracking rollouts of DevOps technologies by name (Kafka is mentioned specifically). It says something about how critical agility is becoming, independent of the business functionality that’s being built on top of it.

My Next Move: IBM “Journey to the Cloud”

I’m really excited to announce that I’ve taken a job with IBM, in their advisory cloud consulting service. The team I’ll be leading will be looking at how organizations can architect internal and external cloud solutions on IBM/RedHat products and clouds as well as on other external providers like AWS/Azure/GCP. I will be based out of New York City, but have responsibility across all of North America. I’m excited about this opportunity both because I think IBM is well positioned to be successful and because I think this job is a great fit for me personally. I’ll use the balance of this post to give you two reasons for each.

I am excited that IBM has a chance to be the preferred provider for people who care deeply about their internal/hybrid cloud. With the acquisition of RedHat and its suite of tools for containers/Kubernetes and DevOps, IBM has more capability than the leading external cloud providers to help you build your internal cloud and then reach out to the public clouds. I believe this will be important to companies that have legacy applications that are not cloud native (and therefore run inefficiently in a cloud pricing model), have large investments in their datacenters and enough demand to cost-effectively use them, and/or have applications that for compliance/security reasons they’re not comfortable running from the public cloud.

I am also excited that IBM can be a trusted advisor to the “second wave” of companies moving to the cloud. The first wave of cloud computing was dominated by the Netflix, Facebook/Instagram, Uber, Twitter, etc… unicorns. They still represent a huge portion of the current public cloud resource consumption. These first-wave companies are, primarily, software engineering companies with no legacy software. They have been able to capitalize on cloud models because they could invest in the engineering to do so with only a few applications that serve billions of people. For companies in this second wave (I’m mostly familiar with Financial Services… but much of the Fortune 500 is with them), the cloud is more challenging. It requires deciding when to re-platform an application that only has 2 developers supporting it. It requires retraining/retooling a development staff (or hiring one and teaching it the intricacies of a business that’s more sophisticated than a tweet). It requires replacing legacy infrastructure services/products with what the new cloud-based applications require. These second-wave companies have relied on IBM for decades for their infrastructure, middleware, etc… and I am happy to be part of the group that will let them rely on us for cloud too.

Personally, this gives me a chance to stay in New York City! I have started to fall in love with this city and the friends I have met here. If you’ve known me for a while you know that I’ve moved a lot for jobs in my career; in fact, this will be the first time that I have taken a job in the same city as the previous one. I’m looking forward to putting down some roots and really enjoying all that the Big Apple has to offer (all while getting to travel a lot!).

Finally, I’m also excited to be back in consulting. While I think my last two jobs (standing up Kubernetes at RBC and completing a large-scale software development project at Fannie Mae) have shown I can excel at implementing/operationalizing, I am excited to get back to what I think I do best: helping people identify opportunities and prove out their value. I’m sure in a few months, when I hear that a client has dropped the ball on a project I proposed, I will remember the brown grass on the other side of this hill. For now though, I am really excited to get to know lots of clients and see what they’re doing with the cloud!

If you’ve made it this far, I thank you for taking an interest in what’s going on with me. Hopefully it’s because you think we can collaborate in the new position? My group will be working with all of the major external cloud providers and we’re going to be aggressively hiring engineers, developers, architects, and consultants who can help IBM’s clients with their “Journey to the Cloud”. If you’re interested in seeing how we can partner, reach out to me on LinkedIn.

Clever Idea: Leave The House Out of It (my app)

One of my favorite experiences has always been going to Las Vegas on an NFL Sunday.  Looking at all of the potential wagers, reading up on all of the games, talking with my friends about what they see, and placing a few bets while drinking that stale coffee at an abandoned Blackjack table is a great way to start a day.  The best part of any weekend in Vegas though, is sitting at the little bar outside the sportsbook at the Venetian (the one where you can get free drinks by occasionally playing $1 Video Poker) and watching 14 games over 12 hours on a dozen big screen TVs.  I try to make it out a time or two per season to risk a couple hundred dollars.

Unfortunately, as a software developer and lover of discrete math… I know just how stupid this is mathematically.  The longer you gamble on anything against the house (the gambler’s term for the casino), the less likely you are to be ahead.  Casinos build a “vig” (the bookmaker’s commission) into their prices to make sure they’re getting more than their fair share of every bet. The easiest way to illustrate this is to look at a bet that is perfectly even, like the flip of a coin.  The way the House prices bets, if you wanted to win $100 on “Heads” you would have to bet $110; same with tails, risk $110 to win $100. While you might win this once or twice, the House will eventually catch up with you.  That’s why I only make Football bets in Vegas, only do it a couple of times a year, and never bet much.
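
To put numbers on that coin flip: a $110 bet that pays $100 wins half the time, so each wager is worth 0.5 × $100 - 0.5 × $110 = -$5 on average. In other words, the House quietly keeps about 4.5 cents of every dollar you risk, no matter which side you take.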

Over the last few years though, I’ve found better and better ways to enjoy many Sundays the same way I enjoy them in Vegas (although I trade the Venetian for my local sports bar with Sunday Ticket) and without paying a tax to the House.  It started 6 years ago when my brother and I hit upon the idea that each of us could pick a few NFL games for a friendly wager and the other person had to take the “House” side, but couldn’t charge the tax. This way as long as I picked better than he did, I would end up winning money.  This allowed us to watch the games together and enjoy them like we were at the casino, but without the need to fly to Vegas and pay the casinos (worst case scenario, I lost money and was paying my brother).

Four years ago the idea grew again: we wanted to find a way to reduce the record-keeping burden and include more of our friends.  That year, the first version of Leave The House Out of It (LTHOI.com) was born. The idea behind it is a simple extension of what we’d been doing for a couple of years.  We would build a “League” of friends and when any one of them wanted to pick a particular game, we would all take a tiny position against them.  Having a lot of people playing with us made the system work even better; now when someone takes a bet that I am forced to take the other side of, I am sharing the responsibility with all the other people in the league.  Over time, these “house positions” tend to even out such that we are truly “Leaving the House Out of It”.
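
To make that bookkeeping concrete, here is a tiny sketch of how a single pick could be spread across a league. This is hypothetical illustration code (the names, the amounts, and the splitHousePosition function are all made up for this post), not the actual LTHOI implementation:

// Hypothetical sketch: everyone who didn't make the pick takes an equal
// slice of the "house" side of that pick.
function splitHousePosition(pick, leagueMembers) {
  var others = leagueMembers.filter(function (member) { return member !== pick.picker; });
  var share = pick.amount / others.length;
  return others.map(function (member) {
    return { member: member, against: pick.picker, game: pick.game, amount: share };
  });
}

// Example: in a six-person league, a $10 pick by "Dan" leaves the other five
// members each holding $2 of the house side of that game.
var positions = splitHousePosition(
  { picker: 'Dan', amount: 10, game: 'NYG @ DAL' },
  ['Dan', 'Mike', 'Sara', 'Joe', 'Beth', 'Ana']
);
console.log(positions);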

We soon discovered that this is an idea that doesn’t even require money to be involved.  With the website version, instead of actual gambling… my friends and I realized we can just play for pride.  Our win-loss records and earnings against each other are tracked alongside the rest of the members of the “league” and we can battle for who has the best record.  Now I get all of the thrill of the Venetian with none of the risk!

Book Review: War, Peace, & IT by Mark Schwartz

The book bills itself as being for the CEO that’s looking to make sense of how the changes in IT should change the way they view the CIO and IT. It mostly lives up to that, though you’ll need to have some knowledge of agile/cloud. As a technologist, I also found it valuable to hear the language and examples the author uses because I think they’ll be valuable as I talk to development managers and infrastructure executives who are trying to figure out how to sell the agile/devops/product/team approach to a business that is used to thinking about “keep the lights on”/multi-year initiatives/chargeback. Overall, I’d recommend the book as a quick read. Mostly to remind you of things you already know/feel and to give you words that’ll help you say them.

A few things I liked in particular:

  • One of the primary points that he makes is that the world is changing so quickly right now that the biggest risk facing most companies is that they won’t be able to change fast enough. With that in mind he argues that DevOps and Cloud are actually investments in Risk Management. I find this powerful for two reasons:
    1. In many companies it’s the people carrying the title “Risk Manager” who want to stop things like deployments during business hours and hosting data outside the data center. This points out that (often, not always) those risk managers are protecting the company from the wrong/lesser risk of small outages at the expense of the big risk (not being able to innovate as fast as your competitors).
    2. It helps justify the cost of some of the DevOps Transformations and Cloud Transformations that need to happen. Often these are hard to justify when they’re compared with the opportunity to deliver on business requirements or the opportunity to use automation to save money in infrastructure. Framing DevOps as a risk management play helps explain why there’s a need to invest in making the company better able to change.
  • He actively fights the urge to run “IT as a Business”, arguing that it needs to be part of the company like finance or marketing. He rightly points out that in most companies IT is expected to operate like a sort of sub-contractor without being given the ability to hire freely and set salaries optimally; IT can’t choose to avoid business customers that become untenable, and it can’t charge more for higher-demand items instead of passing on costs (at least in most chargeback models I’ve seen). Additionally, making IT a sub-contractor means adding red tape to the change process: IT is incentivized to offer standard products, to require the business to “pay” for changing requirements, etc…
  • He uses one of my favorite analogies for explaining agile software development vs. project-based development to business people: it is like buying options. Business people understand the risk difference between buying assets and buying options… most even remember that moment of surprise in business school when they realized just how different the price of an option is from the price of the underlying asset. Agile software development is like that. You can build an MVP for the smallest slice of the market you can think of and then wait and see if customers like it before agreeing to pay for the rest of the project.

Clever Idea: Serverless is Commitmentless

I’ve decided to start a bit of a series of blog posts called “Clever Idea”. I’ll use the space to talk about something I’m working on either in my hobby project (an app for picking Football games with your friends) or at work that I think is clever. The intent will partly be to share a technique or technology that I think some people reading might be interested in. I’ll also be hoping that occasionally someone points out to me that I’m not as clever as I thought and there’s actually a better way to accomplish what I’m trying to accomplish.

This season will be the first season that I’m planning to open up my little app to more than just my immediate friends. I’m planning to let my friends invite their friends to form leagues and hopefully end up with a couple dozen leagues. Being somebody who has floated from infrastructure to software development and back again throughout my career, I know that this doesn’t just mean developing new screens for adding leagues and registering new players… It means planning out my infrastructure so that the application can scale appropriately.

Like any good architect, I started with the usual questions… how many concurrent users do I think I will have? How many rows in the databases? etc… The problem here is, like most software projects, I don’t really know. It could be that no one will be interested in the app and I’ll only have a couple users. It could also be that sometime during the season it spreads like wildfire and I have a hundred leagues all of a sudden.

I have a micro-service-based architecture. Last season it consisted primarily of containerized Spring Boot apps running on an EKS (Kubernetes) cluster and communicating with a relational database deployed on AWS RDS. This architecture is certainly scalable relative to the vast majority of monolithic applications that exist in enterprise IT today. I had an auto-scaling group set up to support the EKS cluster, and it would scale down to two nodes and up as far as I needed. Without re-architecting the database, it probably could have scaled to several hundred leagues. It’s pretty flexible, allowing my AWS bill to run from ~$200/mo (a small RDS server, a couple small K8s application nodes, and the EKS control plane) to a cluster/db large enough to support a few hundred leagues, with the only down-time being when I switched from smaller DB instances to larger ones.

It’s not nearly as flexible as Lambda / DynamoDB though. When I rebuilt the application this year, it was with that flexibility specifically in mind. The app now runs entirely on Lambda services and stores data (including cached calculations) in DynamoDB. Both of these are considered serverless, which means AWS ensures that my Lambda services always have a place to run and my DynamoDB data is always available, actually providing more reliability/availability than the K8s/cluster architecture I had built. More importantly for this post, Lambda and DynamoDB are both “pay by the drink”. With Amazon, those are very small drinks:

  • The base unit for Lambda is 50ms. A typical call from the front-end of my app to the backend will result in a log line that reads: “REPORT RequestId: 6419f5ca-f747-4b77-a311-09392fc6bcc3 Duration: 148.03 ms Billed Duration: 150 ms Memory Size: 256 MB Max Memory Used: 151 MB”. AWS charges (past the free tier… which is a nice feature, but not the focus here) $0.0000002 per request and $0.0000166667 per GB-second. For 10,000 calls similar to the one above, I’d be charged $0.002 for the number of calls and about $0.006 for the memory consumed. We do need to remember that in a micro-service architecture there are a lot of calls; some actions will result in 5 or 6 Lambda services running. However, based on the numbers above, if I end up with only a handful of users, my Lambda charges will be negligible.
  • For DynamoDB, the lower end of the scale is similarly impressive: $1.25 per million writes, $0.25 per million reads, and $0.02 per 100,000 DynamoDB Stream reads (more on those in another post). I know from last season that if I only have a couple of leagues then, after refactoring for a big-data architecture, I will end up with ~5,000 bets to keep track of that are rarely ever read (there are cached totals) but often get written and rewritten (let’s say 5 times per bet), ~300 games that are read every time a user loads their bet page (let’s say players read all 300 games 12,500 times), and ~25 player/league records that are used on nearly every call from the UI (let’s say users are queried 50,000 times). If I use those conservative guesses for usage, a small season would cost me $0.036 for the writes and the resulting DynamoDB Stream reads and $0.95 to satisfy all the reads of the games and leagues. That means my relatively small league is costing me less than $1.00 for the whole season (the quick script after this list puts all of these numbers together).
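
Here is that back-of-the-envelope math as a small script. The per-unit rates are the published Lambda and DynamoDB prices quoted above, and the usage figures are the same guesses from the bullets, so treat the output as an estimate rather than a bill:

// Rough serverless cost estimate for a small LTHOI season.
// Rates are the Lambda/DynamoDB prices quoted above; usage numbers are my guesses.

// Lambda: ~10,000 calls billed at 150 ms each with 256 MB configured
var lambdaCalls = 10000;
var lambdaRequestCost = lambdaCalls * 0.0000002;                           // ~$0.002
var lambdaComputeCost = lambdaCalls * (256 / 1024) * 0.150 * 0.0000166667; // ~$0.006

// DynamoDB: writes, stream reads, and reads from the bullets above
var writes      = 5000 * 5;            // ~5,000 bets, rewritten ~5 times each
var streamReads = writes;              // each write also shows up on the stream
var reads       = 12500 * 300 + 50000; // game reads plus player/league reads

var dynamoWriteCost  = writes * (1.25 / 1000000);      // ~$0.031
var dynamoStreamCost = streamReads * (0.02 / 100000);  // ~$0.005
var dynamoReadCost   = reads * (0.25 / 1000000);       // ~$0.95

var total = lambdaRequestCost + lambdaComputeCost +
            dynamoWriteCost + dynamoStreamCost + dynamoReadCost;
console.log('Estimated season cost: $' + total.toFixed(2)); // roughly a dollar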

The reason I titled this post “Serverless is Commitmentless” is that I can build an app and host it on redundant compute and storage without really paying anything at all. If I get lucky though, and the application is a huge success, this architecture would scale to thousands of leagues before I need to rethink my DynamoDB indexes. As long as my revenue goes up faster than the AWS hosting fees when that happens, I have zero upside or downside risk from infrastructure cost when starting a new project.

Book Review: Algorithms To Live By

This book has a clever punch line… “What can we as humans learn from computers about how to approach problems that face us in everyday life? Get the answer from a psychologist and a computer scientist.” A nice side effect is that it explains some of the more interesting concepts in computer science (search, caching, communicating over an imperfect network, etc…) through the lens of everyday life. The result is a book that not only offers clever anecdotes about everyday life, it also educates us on concepts that affect computers, networks, and computer science.

Some of the interesting points that we can learn from computer science and apply to everyday life:

  1. I think their overall point is that there are a great many problems that, even with a supercomputer, cannot be solved unequivocally. The key is to pick an algorithm that, within an appropriate amount of time, has the best chance of giving you the best answer. Once you have confidence you’ve done that, live without regret. The authors quote Bertrand Russell on this, “it would seem we must take account of probability in judging of objective rightness… The objectively right act is the one which will probably be most fortunate. I shall define this as the wisest act.” We can hope to be fortunate, but we should strive to be wise. There are a bunch of examples that fall into this category, but my favorite is:
    • Suppose you have 2 months to find an apartment, you don’t feel you can really compare two apartments without seeing both of them, and you run a very high risk of losing an apartment if you don’t make an offer the minute you see it. Then it is optimal to spend the first 37% of your time looking but unwilling to commit, and the remaining 63% ready to pounce on the first place that beats everything you’ve seen so far. There’s still only about a 37% chance you actually end up in the best place you could have… but you will have done the “wise” thing.
  2. There’s a great little lesson on exponential backoff that I loved. Essentially, exponential backoff is the practice of letting repeated failures drastically reduce the frequency with which you make the next attempt. It’s common in computer science because computers can be so persistent if told to just retry infinitely. My desktop application, for example, may try to communicate with the server and fail. It retries immediately (for a computer, that’s a millisecond later). If both attempts fail, it will wait a whole second (perhaps even doing some other task in the meantime) before trying again, then wait longer and longer between attempts, because each failure makes the next attempt less likely to succeed. (There’s a small code sketch of this after the list.) I had never really thought about it, but while exponential backoff is common in computer science, people much more commonly give something a certain number of tries in rapid succession and then give up forever. The authors give this example:
    • “A friend of ours recently mused about a childhood companion who had a disconcerting habit of flaking on social plans. What to do? Deciding once and for all that she’d finally had enough and giving up entirely on the relationship seemed arbitrary and severe, but continuing to persist in perpetual rescheduling seemed naive, liable to lead to an endless amount of disappointment and wasted time. Solution: Exponential Backoff on the invitation rate. Try to reschedule in a week, then two, then four, then eight. The rate of retry goes toward zero, yet you never have to give up entirely” [maybe they were just going through something personal].
  3. They talk about how computationally intensive it is to do a full sort. If you’re a computer nerd… you’ll love that they actually use big O notation to do so!!! While sorting makes searching easier, you have to do a LOT of searching to justify a full sort of a large set. Consequently they recommend a life filled with bucket sorts, where you just put things that are categorically alike together and then search through the categorical subset when you need a specific item. As a slob that treats his inbox as a steady stream… this seems to justify my actions!
  4. There’s discussion of caching and the different algorithms that computers use to decide what information to keep “close at hand” and what to file away for slow retrieval if/when needed; the authors then brilliantly analogize that to organizing your closet, filing cabinet, and house. That also justifies a certain amount of apparent disorganization!
  5. They end up basically justifying agile product/project management by pointing out the level of complexity in predicting things that are years away and have increasingly high numbers of factors influencing them.
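
As promised in point 2, here is a tiny sketch of exponential backoff the way a computer would run it. The delays and the attempt cap are arbitrary illustrative numbers, and attemptToContactServer is a made-up stand-in for a real network call:

// Minimal exponential backoff sketch: each failure doubles the wait before
// the next attempt (2s, 4s, 8s, ...), up to an arbitrary cap on attempts.
function callWithBackoff(attempt, maxAttempts) {
  attemptToContactServer(function (succeeded) {
    if (succeeded) {
      console.log('Connected on attempt ' + attempt);
    } else if (attempt < maxAttempts) {
      var delayMs = Math.pow(2, attempt) * 1000;
      console.log('Failed; retrying in ' + (delayMs / 1000) + 's');
      setTimeout(function () { callWithBackoff(attempt + 1, maxAttempts); }, delayMs);
    } else {
      console.log('Giving up (for now).');
    }
  });
}

// Stand-in for a real network call; it fails most of the time on purpose.
function attemptToContactServer(callback) {
  setTimeout(function () { callback(Math.random() < 0.2); }, 10);
}

callWithBackoff(1, 6);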

As I said though, it’s not all about learning life lessons. If you don’t know much about computers you can expect to learn how the internet communicates messages over unreliable wires (or even thin air), how computers calculate the route between two locations, and the magic of Big O notation.

Clever Idea – Invitations with Angular and OAuth

I’ve decided to start a bit of a series of blog posts called “Clever Idea”. I’ll use the space to talk about something I’m working on either in my hobby project (an app for picking Football games with your friends) or at work that I think is clever. The intent will partly be to share a technique or technology that I think some people reading might be interested in. I’ll also be hoping that occasionally someone points out to me that I’m not as clever as I thought and there’s actually a better way to accomplish what I’m trying to accomplish.

The Problem

As some of you know (and I’ll be posting about more), I have run a little app for picking football games for the last 4 or 5 years. It allows a “league” of people to keep score on who does the best picking games against the spread. This year I am making an improvement that lets existing users form new leagues. A user is asked to specify the email addresses of everyone they’d like to have participate in the league. If you specify someone who is already in the system, they are simply added to your list of players. The first problem comes in when you want to invite someone who isn’t currently enrolled in the system.

I use Google OAuth for authentication. So I will need to “pair” the identity that an existing user has specified as wanting to be in their league with a logged in Google ID. Note that I feel this is necessary because not everyone uses the email address associated with their Google ID as their primary email address, so I would not want to make it impossible to join unless the person inviting you happened to guess the right email. One of my friends gave me the great suggestion of providing users a link. If I send an email to the new player and say they’ve been invited to play in a league, I can include a link with a query string parameter that includes their “user id” (a random GUID created by my system). This is where things get more difficult. Unfortunately (in this case), I’m using Angular for my frontend. Angular creates a downloadable browser app that players in the league use to interact with the system and that keeps itself refreshed through backend API calls. I’m a huge fan of Angular, BUT it makes my “pairing link” impossible.

Angular apps are fully downloaded to the browser and don’t actually fetch new pages when the URL changes (navigation is handled client-side through the Angular “router”). Consequently, I can’t send someone directly to the “pairing” section of my app, because that URL doesn’t “exist” on the web. In its deployment instructions, the Angular team recommends forcing all incoming traffic to a single entry URL via your app server. Seemingly this leaves me with no choice but an unfriendly flow where I force users to land at my strange website for the first time, trust it enough to “login” even though they aren’t yet users, and paste their “pairing ID” (the GUID that represents them in my system) into the system so it recognizes who they are.
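
For context, the server-side piece of that recommendation looks roughly like the sketch below. This is a hypothetical Express static host (the framework, paths, and port are assumptions for illustration, not my actual setup); the point is simply that every unknown URL gets answered with the app’s index.html so the Angular router can take over, which is exactly why a pairing link with a uid query string parameter can never land on a page of its own:

// Hypothetical Express host for a compiled Angular app (illustration only).
var express = require('express');
var path = require('path');

var app = express();

// Serve the compiled Angular bundle (js, css, assets) directly.
app.use(express.static(path.join(__dirname, 'dist/lthoi')));

// Every other URL falls through to index.html; the Angular router then
// decides what to render on the client.
app.get('*', function (req, res) {
  res.sendFile(path.join(__dirname, 'dist/lthoi/index.html'));
});

app.listen(8080);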

The Clever Idea

Once you hear the idea, it seems obvious. However, I had fully coded the crappy user experience mentioned above before I came up with it… Just build a simple webpage outside of your Angular deployment artifacts, but located alongside your app. This means that Google will recognize the host as whitelisted when someone tries to log in, your users will trust it as your site, AND you can send people to a specific page with query string parameters. Below is the simple HTML/JavaScript page that I set up for the purpose (you can find it here on git):

<html lang="en">
<head>
  <meta name="google-signin-scope" content="profile email">
  <meta name="google-signin-client_id" content="<id>.apps.googleusercontent.com">
  <script src="https://apis.google.com/js/platform.js" async defer></script>
</head>
<body>
<h1>Welcome to the New Player Page for LTHOI</h1>
To pair, please follow these steps:
<ol>
  <li>If you don't want to play Leave the House Out of It, do NOT fill in this form.  Your invitation will expire and the person who invited you will be alerted.  You will be removed from our database entirely.</li>
  <li>Sign in to your Google Account.
    <div class="g-signin2" data-onsuccess="onSignIn" data-theme="dark"></div></li>
  <li>Fill in your first name and last initial.<br>
    <input type="text" id="fn" value="Your First Name"><br>
    <input type="text" id="li" value="Your last initial"><br>
    <button align="center" onclick="onSubmit()">Accept Invitation to Play Leave The House Out of It</button></li>
</ol>
<p id="error"></p>
<script>
  var access_token="Unregistered";
  function onSignIn(googleUser)
  {
    // The OAuth access token to pass to the backend (sent in the 'googleToken' header below):
    access_token = googleUser.getAuthResponse().access_token;
  }
  function onSubmit()
  {
    // Get the invited user's pairing GUID from the query string
    var urlParams = new URLSearchParams(window.location.search);
    var uid = urlParams.get('uid');
    var request = new XMLHttpRequest();
    request.open('POST', ('<BackEnd>/bff/pair?uid=' + uid), true);
    request.setRequestHeader('googleToken', access_token);
    request.setRequestHeader('Content-Type', 'application/json');
    // The request is asynchronous, so check the result in the onload/onerror
    // callbacks rather than immediately after send()
    request.onload = function ()
    {
      if (request.status != 200)
      {
        document.getElementById("error").innerText = 'That did not work.  Are you sure you were invited?  logged in?';
      }
      else
      {
        window.location.replace("<dev location of LTHOI so they can login>");
      }
    };
    request.onerror = function ()
    {
      document.getElementById("error").innerText = 'That did not work.  Are you sure you were invited?  logged in?';
    };
    request.send(JSON.stringify({
      "first_name": document.getElementById("fn").value,
      "last_initial": document.getElementById("li").value
    }));
  }
</script>
</body>
</html>

Book Review: The Age of Surveillance Capitalism

I had extremely high hopes for this book after listening to an interview with the author. Unfortunately, most of what I liked about the book I heard in that 20 minute interview. I believe that the core theme of this book is a topic that will shape the next 30 years of American life. Unfortunately the vast majority of the book is spent detailing (and at times exaggerating) how evil the current system is instead of charting a course for correction without simply stopping progress. With that in mind, I’m going to offer in this review a few things I like and then discuss a few places where she’s gone too far or failed to cover a topic.

I love three aspects of the way the problem is framed:

  1. I think it is brilliant that Zuboff points to Google’s (and then the rest of the Internet’s) switch from a “pay per impression” to a “pay per click” model. This means that Google is taking responsibility not only for displaying someone’s ad… but for getting the user to click on it. This incentivizes them to actively match customers and products. This was the first step onto our present slippery slope.
  2. I love the comparison of being able to commoditize and sell the ability to predict what we will click on to the ability to commoditize and sell labor which began in the industrial revolution. Now, instead of needing 25 cobblers to make enough shoes for my city, I needed 25 laborers to operate the shoe factory. This gave too much power to the capitalists who owned the factory and ended up resulting in unions (and eventually safety and wage related regulations).
  3. Zuboff points out on page 192 that “Demanding Privacy from surveillance capitalists or lobbying for an end to commercial surveillance on the internet is like asking Henry Ford to make each Model T by hand or asking a giraffe to shorten its neck. Such demands are existential threats.” I agree, progress/evolution cannot be simply reversed.

My problem with the book is how evil she claims the surveillance capitalists already are. Though I credit Zuboff with the comparison to the labor market (#2 above) she spends far more time comparing it to the way totalitarian leaders take advantage of their people or the way Europeans took advantage of Native Americans. Zuboff uses hundreds of pages trying to analogize surveillance capitalism to various evil periods in history, leaving very little of the book for the laws, regulations, social protests, etc… that might be necessary to cause a course correction. Perhaps she’s already planning a sequel?

Overall, I think the book is a worthwhile read because of how brilliantly it identifies one of the biggest problems of our time, even if you will have to skim the sections that over-elaborate the problem and deal with the disappointment of a lack of resolutions.

Book Review: Who Can You Trust? by Rachel Botsman

I was inspired to read this book by an interview with Botsman on a podcast I listen to. It’s based on a compelling overall narrative… how have we reached a point where no one trusts the President of the United States, but people agree to stay in the spare rooms of complete strangers on AirBnB and will literally put their lives in the hands of a self-driving car? Botsman came to fame writing about the sharing economy, so it makes sense to hear her perspective on the topic.

The book isn’t as bold as I would have liked in predicting the future. In books like these, I prefer to hear an expert’s opinion on what is likely to happen, whether it is good or not, and what that means we could/should be aware of. Botsman prefers to focus only on the third of these topics: the dangerous trends we are heading towards. That said, she does offer a couple of compelling cliffs that we appear to be confidently striding towards:

  1. Not Evaluating Bots’ Motives – Botsman weaves together stories of her daughter interacting with Alexa and scientific studies to show that as people become more comfortable with technology, we become too trusting of its “answers”. Her concern is that these bots that we “trust” are not there as a public service, but to influence us into certain behaviors.
  2. Trusting Technology Before It’s Ready – Botsman also points out that there are a LOT of areas that artificial intelligence and algorithms have not figured out how to do well, but we seem to be willing to trust them without testing them. There are several good academic studies where this behavior is exhibited.
  3. Algorithmic Trust (Blockchain) Can Be Manipulated – Botsman covers the Ethereum DAO hack very well; even if you don’t know much about blockchain, you’ll be able to follow it. She points out two concerns with the hack. First, that the hacker didn’t really “break” the system, just found a way to use it that was unintended… imagine if Ethereum had become an international currency before that happened. Second, that a relatively small group of people decided to effectively “undo” the hack. In this case they were doing “good”, but it still served to demonstrate that blockchains can be manipulated.
  4. Reputation-Based Trust Getting Out of Hand – The topic of the Social Credit Score in China is covered in more detail than I’ve ever read before, and it is terrifying! It’s essentially a credit score system that doesn’t just evaluate your wealth and default history, but your friends, your political leanings, the places you’ve gone, your grades in school, your hobbies, and more. Essentially this will give the government a lever to incentivize all manner of behavior. The consequences will be real too: “people with low ratings will have slower internet connectivity; restricted access to more desirable night clubs, restaurants, and golf courses.” To quote a government document, the goal is to “allow the trustworthy to roam everywhere under heaven while making it hard for the discredited to take a single step.” While it is easy to say that’s just China… we already evaluate people based on the number of followers they have, little blue check marks, star ratings on AirBnB or Uber, etc…

Overall, I recommend the book, but only if you are interested in these topics and would like a little handbook of their current state. I also don’t think this book will age well, as these topics are evolving quickly. If you happen to come across this blog post after 2019… I doubt I’d still recommend it.