Book Review: Algorithms To Live By

This book has a clever premise: "What can we as humans learn from computers about how to approach the problems that face us in everyday life? Get the answer from a psychologist and a computer scientist." A clever side effect is that it explains some of the more interesting concepts in computer science (search, caching, communicating over an imperfect network, etc.) through the lens of everyday life. The result is a book that not only offers clever anecdotes about everyday life, but also educates us on the concepts that underpin computers, networks, and computer science.

Some of the interesting points we can learn from computer science and apply to everyday life:

  1. I think their overall point is that there are a great many problems that, even with a supercomputer, cannot be solved unequivocally. The key is to pick an algorithm that, within an appropriate amount of time, has the best chance of giving you the best answer. Once you have confidence you’ve done that, live without regret. The authors quote Bertrand Russell on this, “it would seem we must take account of probability in judging of objective rightness… The objectively right act is the one which will probably be most fortunate. I shall define this as the wisest act.” We can hope to be fortunate, but we should strive to be wise. There are a bunch of examples that fall into this, but my favorite is:
    • Suppose you have 2 months to find an apartment, you don’t feel you can really compare two apartments without seeing both of them, and you run a very high risk of losing an apartment if you don’t make an offer the minute you see it. Then it is optimal to spend the first 37% of your time looking but unwilling to commit, and the remaining 63% ready to pounce the moment you find a place better than anything you’ve seen. There’s still only about a 37% chance you actually end up in the best place you could have… but you will have done the “wise” thing.
  2. There’s a great little lesson on exponential backoff that I loved. Essentially, exponential backoff is the practice of letting repeated failures drastically reduce the frequency with which you make an attempt. It’s common in computer science because computers can be so persistent if told to just retry infinitely. My desktop application, for example, may try to communicate with the server and fail. It then retries immediately (for a computer, that’s a millisecond later). If both attempts fail, it waits a whole second (perhaps even doing some other task in the meantime) before trying again, then waits longer and longer between attempts, because each failure makes the next attempt less likely to succeed. I had never really thought about it, but while exponential backoff is common in computer science, people much more commonly give something a certain number of tries in rapid succession and then give up forever. The authors give this example:
    • “A friend of ours recently mused about a childhood companion who had a disconcerting habit of flaking on social plans. What to do? Deciding once and for all that she’d finally had enough and giving up entirely on the relationship seemed arbitrary and severe, but continuing to persist in perpetual rescheduling seemed naive, liable to lead to an endless amount of disappointment and wasted time. Solution: Exponential Backoff on the invitation rate. Try to reschedule in a week, then two, then four, then eight. The rate of retry goes toward zero, yet you never have to give up entirely” [maybe they were just going through something personal].
  3. They talk about how computationally intensive it is to do a full sort. If you’re a computer nerd… you’ll love that they actually use Big O notation to do so!!! While sorting makes searching easier, you have to do a LOT of searching to justify a full sort of a large set. Consequently, they recommend a life filled with bucket sorts, where you just put things that are categorically alike together and then search through the categorical subset when you need a specific item. As a slob who treats his inbox as a steady stream… this seems to justify my actions!
  4. There’s discussion of caching and the different algorithms computers use to decide what information to keep “close at hand” and what to file away for slow retrieval if/when needed; the authors then brilliantly analogize that to organizing your closet, filing cabinet, and house. That also justifies a certain amount of apparent disorganization!
  5. They end up basically justifying agile product/project management by pointing out how complex it is to predict things that are years away and influenced by an increasingly high number of factors.
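
The 37% rule in point 1 is easy to see in a quick simulation (my own sketch, not code from the book): reject everything in the first 37% of the pool, then commit to the first option that beats all of them.

```python
import random

def look_then_leap(candidates, look_fraction=0.37):
    """Reject the first 37% outright, then take the first candidate
    better than everything seen during the look phase."""
    cutoff = int(len(candidates) * look_fraction)
    best_seen = max(candidates[:cutoff]) if cutoff else float("-inf")
    for score in candidates[cutoff:]:
        if score > best_seen:
            return score
    return candidates[-1]  # never found a better one; settle for the last

# Estimate how often the rule lands the single best apartment out of 100.
random.seed(42)
trials = 10_000
wins = sum(
    look_then_leap(apartments) == max(apartments)
    for apartments in ([random.random() for _ in range(100)] for _ in range(trials))
)
print(f"Found the best apartment in {wins / trials:.0%} of trials")
```

Run it and the success rate hovers right around that same 37%, which is the surprising part of the result: the strategy and its odds of success share the number.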
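
Point 2’s exponential backoff looks something like this in code (again my own sketch; `flaky_send` is a made-up stand-in for any unreliable network call):

```python
import time

def retry_with_backoff(operation, max_attempts=5, base_delay=1.0):
    """Retry `operation`, doubling the wait after each failure."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of patience; give up for real
            time.sleep(base_delay * 2 ** attempt)  # 1s, 2s, 4s, 8s...

# Demo with a hypothetical flaky call that succeeds on the third try.
attempts = []
def flaky_send():
    attempts.append(1)
    if len(attempts) < 3:
        raise ConnectionError("server unreachable")
    return "delivered"

print(retry_with_backoff(flaky_send, base_delay=0.01))  # delivered
```

The flaky-friend strategy from the quote is the same loop with weeks instead of seconds.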
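
Point 3’s sort-less-search-more idea, sketched with a hypothetical inbox: instead of fully sorting, drop each message into a coarse bucket and only scan the relevant bucket when you need something.

```python
from collections import defaultdict

# File each email under a coarse category instead of fully sorting.
emails = [
    ("receipts", "Your order shipped"),
    ("newsletters", "Weekly digest"),
    ("receipts", "Invoice for March"),
    ("work", "Meeting moved to 3pm"),
]

buckets = defaultdict(list)
for category, subject in emails:
    buckets[category].append(subject)

# Searching now only scans the relevant bucket, not the whole inbox.
matches = [s for s in buckets["receipts"] if "Invoice" in s]
print(matches)  # ['Invoice for March']
```

Grouping is one cheap pass over the pile; the expensive fine-grained ordering never happens unless a search actually demands it.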
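
And point 4’s caching idea, using least-recently-used (LRU) eviction, one of the policies the book covers, in a toy closet-organizing sketch of my own:

```python
from collections import OrderedDict

class LRUCache:
    """Keep recently used items 'close at hand'; when space runs out,
    evict whatever has gone unused the longest."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()

    def get(self, key):
        if key not in self.items:
            return None
        self.items.move_to_end(key)  # mark as freshly used
        return self.items[key]

    def put(self, key, value):
        if key in self.items:
            self.items.move_to_end(key)
        self.items[key] = value
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)  # evict least recently used

closet = LRUCache(2)
closet.put("coat", "front hook")
closet.put("umbrella", "by the door")
closet.get("coat")            # wearing the coat keeps it handy
closet.put("boots", "rack")   # evicts the umbrella, the stalest item
print(list(closet.items))     # ['coat', 'boots']
```

The closet analogy falls out directly: the things you reach for often stay by the door, and the umbrella you haven’t touched all summer migrates to deep storage.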

As I said though, it’s not all about learning life lessons. If you don’t know much about computers, you can expect to learn how the internet communicates messages over unreliable wires (or even thin air), how computers calculate the route between two locations, and the magic of Big O notation.